Test Report: KVM_Linux_crio 21794

1ae3cc206fa1c5283cece957f99367f4350f676e:2025-10-25:42054

Failed tests (2/329)

Order | Failed test                 | Duration (s)
37    | TestAddons/parallel/Ingress | 158.01
243   | TestPreload                 | 137.27
TestAddons/parallel/Ingress (158.01s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-887867 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-887867 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-887867 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [3e817a8a-8811-45b8-9c82-daa462869b72] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [3e817a8a-8811-45b8-9c82-daa462869b72] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003750482s
I1025 08:59:45.782465  107766 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-887867 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.650005733s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
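(Note: "Process exited with status 28" is the remote curl's exit code, and curl exit 28 is CURLE_OPERATION_TIMEDOUT; the request ran its full timeout without an answer from the ingress controller. A minimal sketch for reproducing the probe by hand against this profile; the -m timeout here is illustrative, not the test's exact flag:

    out/minikube-linux-amd64 -p addons-887867 ssh \
      "curl -sv -m 30 -o /dev/null http://127.0.0.1/ -H 'Host: nginx.example.com'")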
addons_test.go:288: (dbg) Run:  kubectl --context addons-887867 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.204
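(The replace/ip/nslookup sequence above is the ingress-dns check: the addon runs a DNS server on the node IP that answers for hosts declared in Ingress rules. A standalone sketch, assuming the same profile:

    IP=$(out/minikube-linux-amd64 -p addons-887867 ip)
    nslookup hello-john.test "$IP")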
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-887867 -n addons-887867
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-887867 logs -n 25: (1.396424175s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	COMMAND | ARGS | PROFILE | USER | VERSION | START TIME | END TIME
	delete | -p download-only-633428 | download-only-633428 | jenkins | v1.37.0 | 25 Oct 25 08:55 UTC | 25 Oct 25 08:55 UTC
	start | --download-only -p binary-mirror-029542 --alsologtostderr --binary-mirror http://127.0.0.1:39567 --driver=kvm2  --container-runtime=crio | binary-mirror-029542 | jenkins | v1.37.0 | 25 Oct 25 08:55 UTC |
	delete | -p binary-mirror-029542 | binary-mirror-029542 | jenkins | v1.37.0 | 25 Oct 25 08:55 UTC | 25 Oct 25 08:55 UTC
	addons | disable dashboard -p addons-887867 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:55 UTC |
	addons | enable dashboard -p addons-887867 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:55 UTC |
	start | -p addons-887867 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:55 UTC | 25 Oct 25 08:58 UTC
	addons | addons-887867 addons disable volcano --alsologtostderr -v=1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:58 UTC | 25 Oct 25 08:58 UTC
	addons | addons-887867 addons disable gcp-auth --alsologtostderr -v=1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC | 25 Oct 25 08:59 UTC
	addons | enable headlamp -p addons-887867 --alsologtostderr -v=1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC | 25 Oct 25 08:59 UTC
	addons | addons-887867 addons disable metrics-server --alsologtostderr -v=1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC | 25 Oct 25 08:59 UTC
	addons | addons-887867 addons disable nvidia-device-plugin --alsologtostderr -v=1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC | 25 Oct 25 08:59 UTC
	addons | addons-887867 addons disable cloud-spanner --alsologtostderr -v=1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC | 25 Oct 25 08:59 UTC
	addons | addons-887867 addons disable headlamp --alsologtostderr -v=1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC | 25 Oct 25 08:59 UTC
	ip | addons-887867 ip | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC | 25 Oct 25 08:59 UTC
	addons | addons-887867 addons disable registry --alsologtostderr -v=1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC | 25 Oct 25 08:59 UTC
	addons | addons-887867 addons disable yakd --alsologtostderr -v=1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC | 25 Oct 25 08:59 UTC
	ssh | addons-887867 ssh cat /opt/local-path-provisioner/pvc-11aeedcd-875b-4940-9537-fce0630e7a57_default_test-pvc/file1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC | 25 Oct 25 08:59 UTC
	addons | addons-887867 addons disable storage-provisioner-rancher --alsologtostderr -v=1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC | 25 Oct 25 09:00 UTC
	addons | addons-887867 addons disable inspektor-gadget --alsologtostderr -v=1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC | 25 Oct 25 08:59 UTC
	addons | configure registry-creds -f ./testdata/addons_testconfig.json -p addons-887867 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC | 25 Oct 25 08:59 UTC
	addons | addons-887867 addons disable registry-creds --alsologtostderr -v=1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC | 25 Oct 25 08:59 UTC
	ssh | addons-887867 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 08:59 UTC |
	addons | addons-887867 addons disable volumesnapshots --alsologtostderr -v=1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 09:00 UTC | 25 Oct 25 09:00 UTC
	addons | addons-887867 addons disable csi-hostpath-driver --alsologtostderr -v=1 | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 09:00 UTC | 25 Oct 25 09:00 UTC
	ip | addons-887867 ip | addons-887867 | jenkins | v1.37.0 | 25 Oct 25 09:01 UTC | 25 Oct 25 09:01 UTC
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:55:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:55:28.422293  108440 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:55:28.422562  108440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:55:28.422572  108440 out.go:374] Setting ErrFile to fd 2...
	I1025 08:55:28.422579  108440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:55:28.422822  108440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
	I1025 08:55:28.423362  108440 out.go:368] Setting JSON to false
	I1025 08:55:28.424226  108440 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2269,"bootTime":1761380259,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:55:28.424326  108440 start.go:141] virtualization: kvm guest
	I1025 08:55:28.426058  108440 out.go:179] * [addons-887867] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 08:55:28.427477  108440 notify.go:220] Checking for updates...
	I1025 08:55:28.427690  108440 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 08:55:28.429403  108440 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:55:28.430979  108440 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-103842/kubeconfig
	I1025 08:55:28.433006  108440 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-103842/.minikube
	I1025 08:55:28.434445  108440 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 08:55:28.435732  108440 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:55:28.437538  108440 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:55:28.468432  108440 out.go:179] * Using the kvm2 driver based on user configuration
	I1025 08:55:28.469788  108440 start.go:305] selected driver: kvm2
	I1025 08:55:28.469823  108440 start.go:925] validating driver "kvm2" against <nil>
	I1025 08:55:28.469842  108440 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:55:28.470558  108440 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:55:28.470850  108440 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:55:28.470880  108440 cni.go:84] Creating CNI manager for ""
	I1025 08:55:28.470937  108440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 08:55:28.470948  108440 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 08:55:28.470999  108440 start.go:349] cluster config:
	{Name:addons-887867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-887867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
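	(The cluster config above is persisted as JSON in the profile directory; the profile.go line below writes it. A quick way to inspect it on the host, assuming this run's MINIKUBE_HOME and that jq is available:
	  jq . /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/config.json)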
	I1025 08:55:28.471169  108440 iso.go:125] acquiring lock: {Name:mk13c1ce3bc6ed883268d1bbc558e3c5c7b2ab77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 08:55:28.472825  108440 out.go:179] * Starting "addons-887867" primary control-plane node in "addons-887867" cluster
	I1025 08:55:28.474309  108440 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:55:28.474356  108440 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-103842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 08:55:28.474370  108440 cache.go:58] Caching tarball of preloaded images
	I1025 08:55:28.474478  108440 preload.go:233] Found /home/jenkins/minikube-integration/21794-103842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 08:55:28.474491  108440 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 08:55:28.474832  108440 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/config.json ...
	I1025 08:55:28.474862  108440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/config.json: {Name:mkb596484eece3bedb7b4fb2f12d1b15cc5cf9ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:55:28.475022  108440 start.go:360] acquireMachinesLock for addons-887867: {Name:mkd4d80b8550b82ada790fb29b73ec76f8d8646f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 08:55:28.475088  108440 start.go:364] duration metric: took 49.495µs to acquireMachinesLock for "addons-887867"
	I1025 08:55:28.475114  108440 start.go:93] Provisioning new machine with config: &{Name:addons-887867 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-887867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 08:55:28.475188  108440 start.go:125] createHost starting for "" (driver="kvm2")
	I1025 08:55:28.476919  108440 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1025 08:55:28.477091  108440 start.go:159] libmachine.API.Create for "addons-887867" (driver="kvm2")
	I1025 08:55:28.477126  108440 client.go:168] LocalClient.Create starting
	I1025 08:55:28.477216  108440 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca.pem
	I1025 08:55:28.758756  108440 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/cert.pem
	I1025 08:55:28.998876  108440 main.go:141] libmachine: creating domain...
	I1025 08:55:28.998900  108440 main.go:141] libmachine: creating network...
	I1025 08:55:29.000355  108440 main.go:141] libmachine: found existing default network
	I1025 08:55:29.000605  108440 main.go:141] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1025 08:55:29.001318  108440 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00268e260}
	I1025 08:55:29.001441  108440 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-887867</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
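	(What libmachine does with this XML, sketched as the equivalent virsh calls; minikube drives libvirt through its API rather than the CLI, and mk-addons-887867.xml is a hypothetical file holding the XML above:
	  virsh --connect qemu:///system net-define mk-addons-887867.xml
	  virsh --connect qemu:///system net-start mk-addons-887867
	  virsh --connect qemu:///system net-dumpxml mk-addons-887867)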
	
	I1025 08:55:29.007887  108440 main.go:141] libmachine: creating private network mk-addons-887867 192.168.39.0/24...
	I1025 08:55:29.076218  108440 main.go:141] libmachine: private network mk-addons-887867 192.168.39.0/24 created
	I1025 08:55:29.076564  108440 main.go:141] libmachine: <network>
	  <name>mk-addons-887867</name>
	  <uuid>7f271aaf-fbb2-4788-b0f6-941c640faeb0</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:8f:48:bb'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1025 08:55:29.076602  108440 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867 ...
	I1025 08:55:29.076633  108440 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21794-103842/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1025 08:55:29.076644  108440 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21794-103842/.minikube
	I1025 08:55:29.076724  108440 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21794-103842/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21794-103842/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1025 08:55:29.358098  108440 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa...
	I1025 08:55:29.870920  108440 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/addons-887867.rawdisk...
	I1025 08:55:29.870992  108440 main.go:141] libmachine: Writing magic tar header
	I1025 08:55:29.871018  108440 main.go:141] libmachine: Writing SSH key tar header
	I1025 08:55:29.871088  108440 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867 ...
	I1025 08:55:29.871157  108440 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867
	I1025 08:55:29.871183  108440 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867 (perms=drwx------)
	I1025 08:55:29.871198  108440 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21794-103842/.minikube/machines
	I1025 08:55:29.871208  108440 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21794-103842/.minikube/machines (perms=drwxr-xr-x)
	I1025 08:55:29.871222  108440 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21794-103842/.minikube
	I1025 08:55:29.871233  108440 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21794-103842/.minikube (perms=drwxr-xr-x)
	I1025 08:55:29.871241  108440 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21794-103842
	I1025 08:55:29.871252  108440 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21794-103842 (perms=drwxrwxr-x)
	I1025 08:55:29.871262  108440 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1025 08:55:29.871270  108440 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1025 08:55:29.871279  108440 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1025 08:55:29.871289  108440 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1025 08:55:29.871298  108440 main.go:141] libmachine: checking permissions on dir: /home
	I1025 08:55:29.871307  108440 main.go:141] libmachine: skipping /home - not owner
	I1025 08:55:29.871313  108440 main.go:141] libmachine: defining domain...
	I1025 08:55:29.872597  108440 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-887867</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/addons-887867.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-887867'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
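	(Defining and starting this domain by hand would look roughly like the following; addons-887867.xml is a hypothetical file holding the XML above:
	  virsh --connect qemu:///system define addons-887867.xml
	  virsh --connect qemu:///system start addons-887867)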
	
	I1025 08:55:29.880497  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:5d:44:bd in network default
	I1025 08:55:29.881056  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:29.881077  108440 main.go:141] libmachine: starting domain...
	I1025 08:55:29.881082  108440 main.go:141] libmachine: ensuring networks are active...
	I1025 08:55:29.881987  108440 main.go:141] libmachine: Ensuring network default is active
	I1025 08:55:29.882369  108440 main.go:141] libmachine: Ensuring network mk-addons-887867 is active
	I1025 08:55:29.883035  108440 main.go:141] libmachine: getting domain XML...
	I1025 08:55:29.884264  108440 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-887867</name>
	  <uuid>9cdc6478-2b31-4b99-80de-837fd60d2fae</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/addons-887867.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:3c:42:92'/>
	      <source network='mk-addons-887867'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:5d:44:bd'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1025 08:55:31.193062  108440 main.go:141] libmachine: waiting for domain to start...
	I1025 08:55:31.194606  108440 main.go:141] libmachine: domain is now running
	I1025 08:55:31.194624  108440 main.go:141] libmachine: waiting for IP...
	I1025 08:55:31.195395  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:31.195855  108440 main.go:141] libmachine: no network interface addresses found for domain addons-887867 (source=lease)
	I1025 08:55:31.195869  108440 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:55:31.196193  108440 main.go:141] libmachine: unable to find current IP address of domain addons-887867 in network mk-addons-887867 (interfaces detected: [])
	I1025 08:55:31.196242  108440 retry.go:31] will retry after 201.298501ms: waiting for domain to come up
	I1025 08:55:31.399803  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:31.400356  108440 main.go:141] libmachine: no network interface addresses found for domain addons-887867 (source=lease)
	I1025 08:55:31.400372  108440 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:55:31.400625  108440 main.go:141] libmachine: unable to find current IP address of domain addons-887867 in network mk-addons-887867 (interfaces detected: [])
	I1025 08:55:31.400694  108440 retry.go:31] will retry after 317.320013ms: waiting for domain to come up
	I1025 08:55:31.719449  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:31.720215  108440 main.go:141] libmachine: no network interface addresses found for domain addons-887867 (source=lease)
	I1025 08:55:31.720238  108440 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:55:31.720568  108440 main.go:141] libmachine: unable to find current IP address of domain addons-887867 in network mk-addons-887867 (interfaces detected: [])
	I1025 08:55:31.720618  108440 retry.go:31] will retry after 314.300861ms: waiting for domain to come up
	I1025 08:55:32.036298  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:32.036881  108440 main.go:141] libmachine: no network interface addresses found for domain addons-887867 (source=lease)
	I1025 08:55:32.036904  108440 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:55:32.037234  108440 main.go:141] libmachine: unable to find current IP address of domain addons-887867 in network mk-addons-887867 (interfaces detected: [])
	I1025 08:55:32.037279  108440 retry.go:31] will retry after 543.761441ms: waiting for domain to come up
	I1025 08:55:32.583136  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:32.583759  108440 main.go:141] libmachine: no network interface addresses found for domain addons-887867 (source=lease)
	I1025 08:55:32.583804  108440 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:55:32.584216  108440 main.go:141] libmachine: unable to find current IP address of domain addons-887867 in network mk-addons-887867 (interfaces detected: [])
	I1025 08:55:32.584259  108440 retry.go:31] will retry after 533.904352ms: waiting for domain to come up
	I1025 08:55:33.119981  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:33.120455  108440 main.go:141] libmachine: no network interface addresses found for domain addons-887867 (source=lease)
	I1025 08:55:33.120470  108440 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:55:33.120744  108440 main.go:141] libmachine: unable to find current IP address of domain addons-887867 in network mk-addons-887867 (interfaces detected: [])
	I1025 08:55:33.120804  108440 retry.go:31] will retry after 751.52992ms: waiting for domain to come up
	I1025 08:55:33.874406  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:33.875075  108440 main.go:141] libmachine: no network interface addresses found for domain addons-887867 (source=lease)
	I1025 08:55:33.875098  108440 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:55:33.875437  108440 main.go:141] libmachine: unable to find current IP address of domain addons-887867 in network mk-addons-887867 (interfaces detected: [])
	I1025 08:55:33.875487  108440 retry.go:31] will retry after 1.123364201s: waiting for domain to come up
	I1025 08:55:35.000803  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:35.001368  108440 main.go:141] libmachine: no network interface addresses found for domain addons-887867 (source=lease)
	I1025 08:55:35.001385  108440 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:55:35.001669  108440 main.go:141] libmachine: unable to find current IP address of domain addons-887867 in network mk-addons-887867 (interfaces detected: [])
	I1025 08:55:35.001704  108440 retry.go:31] will retry after 909.309259ms: waiting for domain to come up
	I1025 08:55:35.912992  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:35.913540  108440 main.go:141] libmachine: no network interface addresses found for domain addons-887867 (source=lease)
	I1025 08:55:35.913557  108440 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:55:35.913822  108440 main.go:141] libmachine: unable to find current IP address of domain addons-887867 in network mk-addons-887867 (interfaces detected: [])
	I1025 08:55:35.913873  108440 retry.go:31] will retry after 1.322669322s: waiting for domain to come up
	I1025 08:55:37.237799  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:37.238295  108440 main.go:141] libmachine: no network interface addresses found for domain addons-887867 (source=lease)
	I1025 08:55:37.238312  108440 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:55:37.238591  108440 main.go:141] libmachine: unable to find current IP address of domain addons-887867 in network mk-addons-887867 (interfaces detected: [])
	I1025 08:55:37.238654  108440 retry.go:31] will retry after 1.562598496s: waiting for domain to come up
	I1025 08:55:38.802787  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:38.803483  108440 main.go:141] libmachine: no network interface addresses found for domain addons-887867 (source=lease)
	I1025 08:55:38.803500  108440 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:55:38.803910  108440 main.go:141] libmachine: unable to find current IP address of domain addons-887867 in network mk-addons-887867 (interfaces detected: [])
	I1025 08:55:38.803951  108440 retry.go:31] will retry after 2.739993951s: waiting for domain to come up
	I1025 08:55:41.547259  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:41.547864  108440 main.go:141] libmachine: no network interface addresses found for domain addons-887867 (source=lease)
	I1025 08:55:41.547885  108440 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:55:41.548164  108440 main.go:141] libmachine: unable to find current IP address of domain addons-887867 in network mk-addons-887867 (interfaces detected: [])
	I1025 08:55:41.548204  108440 retry.go:31] will retry after 2.393568648s: waiting for domain to come up
	I1025 08:55:43.943601  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:43.944151  108440 main.go:141] libmachine: no network interface addresses found for domain addons-887867 (source=lease)
	I1025 08:55:43.944174  108440 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:55:43.944520  108440 main.go:141] libmachine: unable to find current IP address of domain addons-887867 in network mk-addons-887867 (interfaces detected: [])
	I1025 08:55:43.944561  108440 retry.go:31] will retry after 4.451765264s: waiting for domain to come up
	I1025 08:55:48.401408  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:48.402094  108440 main.go:141] libmachine: domain addons-887867 has current primary IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:48.402115  108440 main.go:141] libmachine: found domain IP: 192.168.39.204
	I1025 08:55:48.402130  108440 main.go:141] libmachine: reserving static IP address...
	I1025 08:55:48.402460  108440 main.go:141] libmachine: unable to find host DHCP lease matching {name: "addons-887867", mac: "52:54:00:3c:42:92", ip: "192.168.39.204"} in network mk-addons-887867
	I1025 08:55:48.589998  108440 main.go:141] libmachine: reserved static IP address 192.168.39.204 for domain addons-887867
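	(The lease/arp retry loop above corresponds roughly to polling libvirt's interface-address sources until the DHCP lease shows up:
	  virsh --connect qemu:///system domifaddr addons-887867 --source lease
	  virsh --connect qemu:///system domifaddr addons-887867 --source arp)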
	I1025 08:55:48.590022  108440 main.go:141] libmachine: waiting for SSH...
	I1025 08:55:48.590040  108440 main.go:141] libmachine: Getting to WaitForSSH function...
	I1025 08:55:48.593143  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:48.593600  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:48.593633  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:48.593847  108440 main.go:141] libmachine: Using SSH client type: native
	I1025 08:55:48.594091  108440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1025 08:55:48.594104  108440 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1025 08:55:48.709535  108440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
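	(A hand-rolled version of the WaitForSSH probe above, using the key path and user this run provisions; see the sshutil line further down:
	  ssh -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa \
	    docker@192.168.39.204 'exit 0')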
	I1025 08:55:48.709959  108440 main.go:141] libmachine: domain creation complete
	I1025 08:55:48.711615  108440 machine.go:93] provisionDockerMachine start ...
	I1025 08:55:48.714149  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:48.714544  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:48.714568  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:48.714762  108440 main.go:141] libmachine: Using SSH client type: native
	I1025 08:55:48.715025  108440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1025 08:55:48.715038  108440 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 08:55:48.829459  108440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 08:55:48.829487  108440 buildroot.go:166] provisioning hostname "addons-887867"
	I1025 08:55:48.832404  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:48.832889  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:48.832920  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:48.833172  108440 main.go:141] libmachine: Using SSH client type: native
	I1025 08:55:48.833401  108440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1025 08:55:48.833416  108440 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-887867 && echo "addons-887867" | sudo tee /etc/hostname
	I1025 08:55:48.968412  108440 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-887867
	
	I1025 08:55:48.972057  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:48.972587  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:48.972616  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:48.972915  108440 main.go:141] libmachine: Using SSH client type: native
	I1025 08:55:48.973121  108440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1025 08:55:48.973136  108440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-887867' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-887867/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-887867' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 08:55:49.101426  108440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 08:55:49.101462  108440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21794-103842/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-103842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-103842/.minikube}
	I1025 08:55:49.101500  108440 buildroot.go:174] setting up certificates
	I1025 08:55:49.101511  108440 provision.go:84] configureAuth start
	I1025 08:55:49.104196  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:49.104631  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:49.104657  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:49.106868  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:49.107169  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:49.107193  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:49.107305  108440 provision.go:143] copyHostCerts
	I1025 08:55:49.107403  108440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-103842/.minikube/ca.pem (1082 bytes)
	I1025 08:55:49.107545  108440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-103842/.minikube/cert.pem (1123 bytes)
	I1025 08:55:49.107609  108440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-103842/.minikube/key.pem (1675 bytes)
	I1025 08:55:49.107673  108440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-103842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca-key.pem org=jenkins.addons-887867 san=[127.0.0.1 192.168.39.204 addons-887867 localhost minikube]
	I1025 08:55:49.536010  108440 provision.go:177] copyRemoteCerts
	I1025 08:55:49.536073  108440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 08:55:49.538704  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:49.539088  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:49.539109  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:49.539248  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:55:49.627065  108440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 08:55:49.657042  108440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 08:55:49.686439  108440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 08:55:49.715497  108440 provision.go:87] duration metric: took 613.968045ms to configureAuth
	I1025 08:55:49.715532  108440 buildroot.go:189] setting minikube options for container-runtime
	I1025 08:55:49.715739  108440 config.go:182] Loaded profile config "addons-887867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:55:49.719081  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:49.719560  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:49.719590  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:49.719860  108440 main.go:141] libmachine: Using SSH client type: native
	I1025 08:55:49.720114  108440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1025 08:55:49.720137  108440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 08:55:49.969310  108440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 08:55:49.969347  108440 machine.go:96] duration metric: took 1.257707705s to provisionDockerMachine
	I1025 08:55:49.969359  108440 client.go:171] duration metric: took 21.492226364s to LocalClient.Create
	I1025 08:55:49.969383  108440 start.go:167] duration metric: took 21.49229077s to libmachine.API.Create "addons-887867"
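A few lines up (08:55:49.720137) the provisioner writes /etc/sysconfig/crio.minikube and restarts CRI-O so the service CIDR 10.96.0.0/12 is treated as an insecure registry. A minimal sketch for confirming it took effect, assuming the buildroot crio unit really sources that file (the clean restart above suggests it does):

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl show crio -p EnvironmentFiles   # should list the file if the unit loads it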
	I1025 08:55:49.969397  108440 start.go:293] postStartSetup for "addons-887867" (driver="kvm2")
	I1025 08:55:49.969416  108440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 08:55:49.969504  108440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 08:55:49.972860  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:49.973340  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:49.973374  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:49.973532  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:55:50.063018  108440 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 08:55:50.067656  108440 info.go:137] Remote host: Buildroot 2025.02
	I1025 08:55:50.067686  108440 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-103842/.minikube/addons for local assets ...
	I1025 08:55:50.067755  108440 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-103842/.minikube/files for local assets ...
	I1025 08:55:50.067793  108440 start.go:296] duration metric: took 98.385755ms for postStartSetup
	I1025 08:55:50.070560  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:50.071001  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:50.071037  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:50.071315  108440 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/config.json ...
	I1025 08:55:50.071527  108440 start.go:128] duration metric: took 21.596326971s to createHost
	I1025 08:55:50.074049  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:50.074501  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:50.074525  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:50.074722  108440 main.go:141] libmachine: Using SSH client type: native
	I1025 08:55:50.074944  108440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1025 08:55:50.074955  108440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 08:55:50.191388  108440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761382550.156496296
	
	I1025 08:55:50.191414  108440 fix.go:216] guest clock: 1761382550.156496296
	I1025 08:55:50.191423  108440 fix.go:229] Guest: 2025-10-25 08:55:50.156496296 +0000 UTC Remote: 2025-10-25 08:55:50.071540007 +0000 UTC m=+21.696498376 (delta=84.956289ms)
	I1025 08:55:50.191440  108440 fix.go:200] guest clock delta is within tolerance: 84.956289ms
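The clock check above works by running date +%s.%N over SSH and comparing the result against the host's own timestamp; here the guest reads about 85 ms ahead of the recorded remote time, well inside tolerance. A sketch of the same comparison done by hand, reusing the key, user and IP from the log:

	guest=$(ssh -i /home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa docker@192.168.39.204 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %+.3f s\n", h - g }'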
	I1025 08:55:50.191446  108440 start.go:83] releasing machines lock for "addons-887867", held for 21.716346707s
	I1025 08:55:50.194474  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:50.194864  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:50.194888  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:50.195474  108440 ssh_runner.go:195] Run: cat /version.json
	I1025 08:55:50.195531  108440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 08:55:50.198225  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:50.198496  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:50.198555  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:50.198581  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:50.198724  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:55:50.198995  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:50.199030  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:50.199222  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:55:50.313652  108440 ssh_runner.go:195] Run: systemctl --version
	I1025 08:55:50.319844  108440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 08:55:50.478672  108440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 08:55:50.485253  108440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 08:55:50.485335  108440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 08:55:50.506311  108440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 08:55:50.506341  108440 start.go:495] detecting cgroup driver to use...
	I1025 08:55:50.506432  108440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 08:55:50.526953  108440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 08:55:50.544394  108440 docker.go:218] disabling cri-docker service (if available) ...
	I1025 08:55:50.544458  108440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 08:55:50.563903  108440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 08:55:50.580455  108440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 08:55:50.723019  108440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 08:55:50.940346  108440 docker.go:234] disabling docker service ...
	I1025 08:55:50.940423  108440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 08:55:50.956681  108440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 08:55:50.971822  108440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 08:55:51.129048  108440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 08:55:51.277618  108440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 08:55:51.296241  108440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 08:55:51.319818  108440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 08:55:51.319918  108440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:55:51.331892  108440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 08:55:51.331973  108440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:55:51.344451  108440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:55:51.357232  108440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:55:51.370137  108440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 08:55:51.383020  108440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:55:51.395855  108440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:55:51.416562  108440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
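The sed passes above (08:55:51.319918 through 08:55:51.416562) rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned, cgroup_manager is forced to cgroupfs, conmon_cgroup is recreated as "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A sketch for eyeballing the result after the edits:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",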
	I1025 08:55:51.429380  108440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 08:55:51.440591  108440 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 08:55:51.440669  108440 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 08:55:51.460089  108440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
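The failed sysctl at 08:55:51.440591 is expected on a fresh guest: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which the modprobe above provides; IPv4 forwarding is then switched on directly through /proc. Reproduced by hand:

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # the key now exists (commonly defaults to 1)
	cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above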
	I1025 08:55:51.472089  108440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:55:51.610690  108440 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 08:55:51.719951  108440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 08:55:51.720066  108440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 08:55:51.725466  108440 start.go:563] Will wait 60s for crictl version
	I1025 08:55:51.725545  108440 ssh_runner.go:195] Run: which crictl
	I1025 08:55:51.729674  108440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 08:55:51.771412  108440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 08:55:51.771530  108440 ssh_runner.go:195] Run: crio --version
	I1025 08:55:51.801957  108440 ssh_runner.go:195] Run: crio --version
	I1025 08:55:51.832877  108440 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1025 08:55:51.836647  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:51.837023  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:55:51.837047  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:55:51.837230  108440 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1025 08:55:51.842078  108440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:55:51.859045  108440 kubeadm.go:883] updating cluster {Name:addons-887867 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-887867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 08:55:51.859164  108440 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:55:51.859209  108440 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:55:51.894605  108440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1025 08:55:51.894693  108440 ssh_runner.go:195] Run: which lz4
	I1025 08:55:51.898875  108440 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 08:55:51.903541  108440 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 08:55:51.903572  108440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1025 08:55:53.332363  108440 crio.go:462] duration metric: took 1.433515888s to copy over tarball
	I1025 08:55:53.332431  108440 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 08:55:55.018402  108440 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.685941119s)
	I1025 08:55:55.018433  108440 crio.go:469] duration metric: took 1.686042954s to extract the tarball
	I1025 08:55:55.018444  108440 ssh_runner.go:146] rm: /preloaded.tar.lz4
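The stat probe at 08:55:51.898875 found no tarball on the guest, so the ~409 MB preload was copied over and unpacked into /var; --xattrs --xattrs-include security.capability preserves file capabilities on the preloaded binaries, and -I lz4 tells tar to decompress through lz4. The same unpack step, as a standalone sketch assuming the tarball is already in place:

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo crictl images | grep kube-apiserver   # preloaded images should now be listed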
	I1025 08:55:55.059006  108440 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:55:55.105898  108440 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:55:55.105926  108440 cache_images.go:85] Images are preloaded, skipping loading
	I1025 08:55:55.105937  108440 kubeadm.go:934] updating node { 192.168.39.204 8443 v1.34.1 crio true true} ...
	I1025 08:55:55.106048  108440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-887867 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-887867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 08:55:55.106127  108440 ssh_runner.go:195] Run: crio config
	I1025 08:55:55.152949  108440 cni.go:84] Creating CNI manager for ""
	I1025 08:55:55.152978  108440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 08:55:55.153001  108440 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 08:55:55.153023  108440 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.204 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-887867 NodeName:addons-887867 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 08:55:55.153152  108440 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-887867"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.204"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.204"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 08:55:55.153224  108440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 08:55:55.165581  108440 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 08:55:55.165661  108440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 08:55:55.180587  108440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1025 08:55:55.201226  108440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 08:55:55.221851  108440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
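The scp calls above materialize the kubelet drop-in (10-kubeadm.conf), the kubelet unit itself, and the multi-document kubeadm config rendered earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by --- markers and consumed from one path). A sketch for checking that config before init, assuming the staged kubeadm supports the config validate subcommand (present in recent releases):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new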
	I1025 08:55:55.242033  108440 ssh_runner.go:195] Run: grep 192.168.39.204	control-plane.minikube.internal$ /etc/hosts
	I1025 08:55:55.246262  108440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:55:55.261015  108440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:55:55.404538  108440 ssh_runner.go:195] Run: sudo systemctl start kubelet
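With the unit files in place and systemd reloaded, kubelet is started directly. A sketch to confirm systemd actually picked up the drop-in:

	systemctl cat kubelet | head   # kubelet.service plus the 10-kubeadm.conf drop-in header
	systemctl is-active kubelet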
	I1025 08:55:55.434130  108440 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867 for IP: 192.168.39.204
	I1025 08:55:55.434163  108440 certs.go:195] generating shared ca certs ...
	I1025 08:55:55.434193  108440 certs.go:227] acquiring lock for ca certs: {Name:mk3c196d72f190531a27a5874f74b0341375ed0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:55:55.434390  108440 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-103842/.minikube/ca.key
	I1025 08:55:55.544887  108440 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-103842/.minikube/ca.crt ...
	I1025 08:55:55.544924  108440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-103842/.minikube/ca.crt: {Name:mka9dfde2e025024e5263b30c69262cf609676ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:55:55.545107  108440 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-103842/.minikube/ca.key ...
	I1025 08:55:55.545118  108440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-103842/.minikube/ca.key: {Name:mke6ddd0f535db61b0c7abeecda855980b3f1869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:55:55.545196  108440 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-103842/.minikube/proxy-client-ca.key
	I1025 08:55:55.904644  108440 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-103842/.minikube/proxy-client-ca.crt ...
	I1025 08:55:55.904678  108440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-103842/.minikube/proxy-client-ca.crt: {Name:mka9cfb2d2dd2ffedc3953d7e4751520dfcf12c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:55:55.904878  108440 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-103842/.minikube/proxy-client-ca.key ...
	I1025 08:55:55.904890  108440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-103842/.minikube/proxy-client-ca.key: {Name:mkaad5c4045fe4670b4ce43107202870c2307062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:55:55.904972  108440 certs.go:257] generating profile certs ...
	I1025 08:55:55.905032  108440 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.key
	I1025 08:55:55.905046  108440 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt with IP's: []
	I1025 08:55:56.228519  108440 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt ...
	I1025 08:55:56.228550  108440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: {Name:mkd91a432092e603759216748b39c9b0413ac02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:55:56.228726  108440 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.key ...
	I1025 08:55:56.228738  108440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.key: {Name:mkb6220d19f267e2ffdb11c437365779bdb3dfbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:55:56.228824  108440 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/apiserver.key.67c76731
	I1025 08:55:56.228845  108440 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/apiserver.crt.67c76731 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.204]
	I1025 08:55:56.451373  108440 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/apiserver.crt.67c76731 ...
	I1025 08:55:56.451406  108440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/apiserver.crt.67c76731: {Name:mk9ad8ee832c8c4826326856dc311708c88c3bf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:55:56.451574  108440 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/apiserver.key.67c76731 ...
	I1025 08:55:56.451589  108440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/apiserver.key.67c76731: {Name:mk434c00b26d4fbf4db76ad97b5927dc76fc9cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:55:56.451664  108440 certs.go:382] copying /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/apiserver.crt.67c76731 -> /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/apiserver.crt
	I1025 08:55:56.451737  108440 certs.go:386] copying /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/apiserver.key.67c76731 -> /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/apiserver.key
	I1025 08:55:56.451794  108440 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/proxy-client.key
	I1025 08:55:56.451816  108440 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/proxy-client.crt with IP's: []
	I1025 08:55:56.479507  108440 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/proxy-client.crt ...
	I1025 08:55:56.479537  108440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/proxy-client.crt: {Name:mkbbe44e3de8dc827d18ac176a630c06ef35a4cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:55:56.479677  108440 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/proxy-client.key ...
	I1025 08:55:56.479691  108440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/proxy-client.key: {Name:mk7f74fab00cb7095e377ae6cfadd3129e5572c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:55:56.479866  108440 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 08:55:56.479899  108440 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca.pem (1082 bytes)
	I1025 08:55:56.479927  108440 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/cert.pem (1123 bytes)
	I1025 08:55:56.479949  108440 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/key.pem (1675 bytes)
	I1025 08:55:56.480582  108440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 08:55:56.511588  108440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 08:55:56.541734  108440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 08:55:56.571991  108440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 08:55:56.602692  108440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 08:55:56.632581  108440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 08:55:56.665039  108440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 08:55:56.697873  108440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 08:55:56.729145  108440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 08:55:56.759935  108440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 08:55:56.781489  108440 ssh_runner.go:195] Run: openssl version
	I1025 08:55:56.789086  108440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 08:55:56.803247  108440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:55:56.808664  108440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:55:56.808748  108440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:55:56.816540  108440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
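The b5213941.0 link name above is not arbitrary: OpenSSL looks up CA certificates in /etc/ssl/certs by subject-name hash, and openssl x509 -hash -noout prints exactly that hash, which the ln command then uses as the link name. Verified by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941
	ls -l /etc/ssl/certs/b5213941.0   # -> /etc/ssl/certs/minikubeCA.pem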
	I1025 08:55:56.835004  108440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 08:55:56.840589  108440 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 08:55:56.840655  108440 kubeadm.go:400] StartCluster: {Name:addons-887867 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-887867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:55:56.840757  108440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:55:56.840836  108440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:55:56.887368  108440 cri.go:89] found id: ""
	I1025 08:55:56.887481  108440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 08:55:56.905280  108440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 08:55:56.917033  108440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 08:55:56.928811  108440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 08:55:56.928835  108440 kubeadm.go:157] found existing configuration files:
	
	I1025 08:55:56.928907  108440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 08:55:56.939542  108440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 08:55:56.939614  108440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 08:55:56.950985  108440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 08:55:56.961455  108440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 08:55:56.961527  108440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 08:55:56.973865  108440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 08:55:56.986639  108440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 08:55:56.986710  108440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 08:55:56.999948  108440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 08:55:57.011205  108440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 08:55:57.011284  108440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 08:55:57.022936  108440 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 08:55:57.175160  108440 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
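kubeadm init above runs with a long --ignore-preflight-errors list so that pre-created directories, the already-bound kubelet port, and the swap/CPU/memory checks cannot abort a scripted bring-up; the only warning that survives is the disabled kubelet service, which is harmless here because the log started kubelet directly at 08:55:55.404538. Silencing it by hand would just be:

	sudo systemctl enable kubelet.service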
	I1025 08:56:08.192323  108440 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 08:56:08.192406  108440 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 08:56:08.192493  108440 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 08:56:08.192644  108440 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 08:56:08.192782  108440 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 08:56:08.192874  108440 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 08:56:08.194663  108440 out.go:252]   - Generating certificates and keys ...
	I1025 08:56:08.194835  108440 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 08:56:08.194929  108440 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 08:56:08.195027  108440 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 08:56:08.195088  108440 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 08:56:08.195143  108440 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 08:56:08.195185  108440 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 08:56:08.195229  108440 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 08:56:08.195370  108440 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-887867 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	I1025 08:56:08.195413  108440 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 08:56:08.195528  108440 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-887867 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	I1025 08:56:08.195663  108440 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 08:56:08.195754  108440 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 08:56:08.195848  108440 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 08:56:08.195939  108440 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 08:56:08.196016  108440 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 08:56:08.196097  108440 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 08:56:08.196175  108440 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 08:56:08.196264  108440 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 08:56:08.196339  108440 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 08:56:08.196445  108440 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 08:56:08.196545  108440 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 08:56:08.197994  108440 out.go:252]   - Booting up control plane ...
	I1025 08:56:08.198080  108440 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 08:56:08.198142  108440 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 08:56:08.198197  108440 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 08:56:08.198286  108440 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 08:56:08.198366  108440 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 08:56:08.198455  108440 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 08:56:08.198562  108440 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 08:56:08.198597  108440 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 08:56:08.198742  108440 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 08:56:08.198907  108440 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 08:56:08.198963  108440 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.816293ms
	I1025 08:56:08.199075  108440 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 08:56:08.199185  108440 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.204:8443/livez
	I1025 08:56:08.199292  108440 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 08:56:08.199401  108440 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 08:56:08.199510  108440 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.004946379s
	I1025 08:56:08.199614  108440 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.856137124s
	I1025 08:56:08.199718  108440 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002838811s
	I1025 08:56:08.199859  108440 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 08:56:08.200008  108440 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 08:56:08.200063  108440 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 08:56:08.200221  108440 kubeadm.go:318] [mark-control-plane] Marking the node addons-887867 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 08:56:08.200305  108440 kubeadm.go:318] [bootstrap-token] Using token: l3ld7b.whwlj9102ptz5w6p
	I1025 08:56:08.202589  108440 out.go:252]   - Configuring RBAC rules ...
	I1025 08:56:08.202683  108440 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 08:56:08.202762  108440 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 08:56:08.202931  108440 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 08:56:08.203088  108440 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 08:56:08.203180  108440 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 08:56:08.203248  108440 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 08:56:08.203337  108440 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 08:56:08.203377  108440 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 08:56:08.203415  108440 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 08:56:08.203420  108440 kubeadm.go:318] 
	I1025 08:56:08.203471  108440 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 08:56:08.203477  108440 kubeadm.go:318] 
	I1025 08:56:08.203572  108440 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 08:56:08.203581  108440 kubeadm.go:318] 
	I1025 08:56:08.203601  108440 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 08:56:08.203656  108440 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 08:56:08.203699  108440 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 08:56:08.203705  108440 kubeadm.go:318] 
	I1025 08:56:08.203793  108440 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 08:56:08.203813  108440 kubeadm.go:318] 
	I1025 08:56:08.203893  108440 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 08:56:08.203905  108440 kubeadm.go:318] 
	I1025 08:56:08.203979  108440 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 08:56:08.204079  108440 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 08:56:08.204174  108440 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 08:56:08.204188  108440 kubeadm.go:318] 
	I1025 08:56:08.204299  108440 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 08:56:08.204370  108440 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 08:56:08.204380  108440 kubeadm.go:318] 
	I1025 08:56:08.204482  108440 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token l3ld7b.whwlj9102ptz5w6p \
	I1025 08:56:08.204602  108440 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75020ef150409d170daf3c5ec4bf8a806eb596e2cc68a7a6759c166858211ccb \
	I1025 08:56:08.204644  108440 kubeadm.go:318] 	--control-plane 
	I1025 08:56:08.204659  108440 kubeadm.go:318] 
	I1025 08:56:08.204783  108440 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 08:56:08.204792  108440 kubeadm.go:318] 
	I1025 08:56:08.204896  108440 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token l3ld7b.whwlj9102ptz5w6p \
	I1025 08:56:08.205044  108440 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75020ef150409d170daf3c5ec4bf8a806eb596e2cc68a7a6759c166858211ccb 
	I1025 08:56:08.205059  108440 cni.go:84] Creating CNI manager for ""
	I1025 08:56:08.205070  108440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 08:56:08.206604  108440 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 08:56:08.207885  108440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 08:56:08.221343  108440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
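With the podman bridge config disabled earlier (08:55:50.506311) and the 496-byte 1-k8s.conflist written above, /etc/cni/net.d should now hold exactly one active network for CRI-O to use:

	ls /etc/cni/net.d/
	# 1-k8s.conflist  87-podman-bridge.conflist.mk_disabled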
	I1025 08:56:08.245764  108440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 08:56:08.245890  108440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:56:08.245936  108440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-887867 minikube.k8s.io/updated_at=2025_10_25T08_56_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=addons-887867 minikube.k8s.io/primary=true
	I1025 08:56:08.388584  108440 ops.go:34] apiserver oom_adj: -16
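The oom_adj read above confirms the API server's OOM protection: -16 on the legacy scale (where lower is safer) makes the kernel's OOM killer strongly prefer other processes. Checked by hand, with the modern knob alongside:

	cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy scale, -16 here
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # current interface for the same setting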
	I1025 08:56:08.388638  108440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:56:08.889445  108440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:56:09.388690  108440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:56:09.888645  108440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:56:10.389490  108440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:56:10.889551  108440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:56:11.389701  108440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:56:11.888914  108440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:56:11.963754  108440 kubeadm.go:1113] duration metric: took 3.717951779s to wait for elevateKubeSystemPrivileges
	I1025 08:56:11.963832  108440 kubeadm.go:402] duration metric: took 15.123182703s to StartCluster
	I1025 08:56:11.963860  108440 settings.go:142] acquiring lock: {Name:mk3fbb1aeefa7e4423e1917520f38525e6bd947f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:56:11.964005  108440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-103842/kubeconfig
	I1025 08:56:11.964346  108440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-103842/kubeconfig: {Name:mk3d3f05e9f06ad659cee3399b3108e510d71411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:56:11.964528  108440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 08:56:11.964556  108440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 08:56:11.964641  108440 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1025 08:56:11.964795  108440 addons.go:69] Setting yakd=true in profile "addons-887867"
	I1025 08:56:11.964812  108440 addons.go:69] Setting gcp-auth=true in profile "addons-887867"
	I1025 08:56:11.964821  108440 addons.go:69] Setting metrics-server=true in profile "addons-887867"
	I1025 08:56:11.964824  108440 addons.go:238] Setting addon yakd=true in "addons-887867"
	I1025 08:56:11.964834  108440 addons.go:238] Setting addon metrics-server=true in "addons-887867"
	I1025 08:56:11.964835  108440 mustload.go:65] Loading cluster: addons-887867
	I1025 08:56:11.964842  108440 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-887867"
	I1025 08:56:11.964843  108440 addons.go:69] Setting ingress=true in profile "addons-887867"
	I1025 08:56:11.964861  108440 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-887867"
	I1025 08:56:11.964870  108440 addons.go:238] Setting addon ingress=true in "addons-887867"
	I1025 08:56:11.964872  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.964874  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.964899  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.964903  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.965022  108440 config.go:182] Loaded profile config "addons-887867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:56:11.965117  108440 addons.go:69] Setting cloud-spanner=true in profile "addons-887867"
	I1025 08:56:11.965142  108440 addons.go:238] Setting addon cloud-spanner=true in "addons-887867"
	I1025 08:56:11.965160  108440 addons.go:69] Setting ingress-dns=true in profile "addons-887867"
	I1025 08:56:11.965186  108440 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-887867"
	I1025 08:56:11.965202  108440 addons.go:69] Setting volcano=true in profile "addons-887867"
	I1025 08:56:11.965171  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.965215  108440 addons.go:238] Setting addon volcano=true in "addons-887867"
	I1025 08:56:11.965245  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.965624  108440 addons.go:69] Setting volumesnapshots=true in profile "addons-887867"
	I1025 08:56:11.965651  108440 addons.go:238] Setting addon volumesnapshots=true in "addons-887867"
	I1025 08:56:11.965679  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.964804  108440 addons.go:69] Setting inspektor-gadget=true in profile "addons-887867"
	I1025 08:56:11.965827  108440 addons.go:238] Setting addon inspektor-gadget=true in "addons-887867"
	I1025 08:56:11.965857  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.964826  108440 addons.go:69] Setting default-storageclass=true in profile "addons-887867"
	I1025 08:56:11.965906  108440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-887867"
	I1025 08:56:11.966067  108440 addons.go:69] Setting registry=true in profile "addons-887867"
	I1025 08:56:11.964798  108440 config.go:182] Loaded profile config "addons-887867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:56:11.966087  108440 addons.go:238] Setting addon registry=true in "addons-887867"
	I1025 08:56:11.964812  108440 addons.go:69] Setting registry-creds=true in profile "addons-887867"
	I1025 08:56:11.966139  108440 addons.go:238] Setting addon registry-creds=true in "addons-887867"
	I1025 08:56:11.966157  108440 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-887867"
	I1025 08:56:11.966165  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.966198  108440 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-887867"
	I1025 08:56:11.966224  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.966331  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.965204  108440 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-887867"
	I1025 08:56:11.966550  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.966736  108440 addons.go:69] Setting storage-provisioner=true in profile "addons-887867"
	I1025 08:56:11.966750  108440 addons.go:238] Setting addon storage-provisioner=true in "addons-887867"
	I1025 08:56:11.966819  108440 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-887867"
	I1025 08:56:11.966849  108440 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-887867"
	I1025 08:56:11.966852  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.965190  108440 addons.go:238] Setting addon ingress-dns=true in "addons-887867"
	I1025 08:56:11.967311  108440 out.go:179] * Verifying Kubernetes components...
	I1025 08:56:11.967315  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.968874  108440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:56:11.971655  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.972887  108440 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	W1025 08:56:11.973994  108440 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1025 08:56:11.974156  108440 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 08:56:11.974217  108440 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 08:56:11.974228  108440 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 08:56:11.974405  108440 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 08:56:11.975848  108440 addons.go:238] Setting addon default-storageclass=true in "addons-887867"
	I1025 08:56:11.977088  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.977521  108440 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 08:56:11.977544  108440 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 08:56:11.977928  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 08:56:11.977555  108440 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 08:56:11.977558  108440 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 08:56:11.978137  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 08:56:11.976113  108440 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 08:56:11.978166  108440 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 08:56:11.977607  108440 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-887867"
	I1025 08:56:11.978306  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:11.978390  108440 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:56:11.978399  108440 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 08:56:11.978396  108440 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 08:56:11.978399  108440 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 08:56:11.978420  108440 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1025 08:56:11.978449  108440 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 08:56:11.979984  108440 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 08:56:11.978465  108440 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 08:56:11.979569  108440 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 08:56:11.980155  108440 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 08:56:11.979761  108440 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 08:56:11.980195  108440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 08:56:11.980502  108440 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1025 08:56:11.980503  108440 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 08:56:11.980852  108440 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 08:56:11.981316  108440 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 08:56:11.981335  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 08:56:11.981370  108440 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 08:56:11.981383  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 08:56:11.981318  108440 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 08:56:11.981423  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 08:56:11.981996  108440 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 08:56:11.981997  108440 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:56:11.982045  108440 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 08:56:11.982151  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 08:56:11.981998  108440 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 08:56:11.983171  108440 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 08:56:11.983193  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 08:56:11.983866  108440 out.go:179]   - Using image docker.io/busybox:stable
	I1025 08:56:11.983913  108440 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 08:56:11.983927  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 08:56:11.983866  108440 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 08:56:11.985492  108440 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 08:56:11.985547  108440 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 08:56:11.986667  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.986959  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.987063  108440 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 08:56:11.987081  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 08:56:11.988053  108440 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 08:56:11.988398  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:11.988439  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.988492  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.988914  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:11.988948  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.989528  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:11.990331  108440 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 08:56:11.990328  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:11.990688  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:11.990734  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.991170  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.991277  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:11.992313  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.992476  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.992716  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:11.992750  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.993149  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.993217  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:11.993255  108440 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 08:56:11.994004  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:11.994038  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.994223  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:11.994257  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.994300  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.994676  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:11.994735  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.994760  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:11.995205  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.995300  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:11.995333  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.995442  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.995501  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.995826  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:11.995843  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:11.995863  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.995877  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.995965  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:11.996110  108440 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 08:56:11.996421  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:11.996442  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:11.996470  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.996507  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:11.996623  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:11.996636  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.996965  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:11.996976  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.997000  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.997174  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:11.997183  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:11.997449  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:11.997788  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:11.997820  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.997941  108440 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 08:56:11.997961  108440 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 08:56:11.998050  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:11.998176  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.998760  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:11.998804  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:11.998993  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:12.000801  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:12.001185  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:12.001205  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:12.001341  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	W1025 08:56:12.142442  108440 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53286->192.168.39.204:22: read: connection reset by peer
	I1025 08:56:12.142479  108440 retry.go:31] will retry after 223.966824ms: ssh: handshake failed: read tcp 192.168.39.1:53286->192.168.39.204:22: read: connection reset by peer
	W1025 08:56:12.174025  108440 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53300->192.168.39.204:22: read: connection reset by peer
	I1025 08:56:12.174068  108440 retry.go:31] will retry after 330.261054ms: ssh: handshake failed: read tcp 192.168.39.1:53300->192.168.39.204:22: read: connection reset by peer
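	(The two handshake failures above are transient: sshutil marks them retryable and retry.go waits a randomized delay before redialing. A minimal Go sketch of that retry-with-jittered-delay pattern — not minikube's actual retry.go; dial is a hypothetical stand-in for the SSH handshake:
		package main

		import (
			"errors"
			"fmt"
			"math/rand"
			"time"
		)

		// dial stands in for the SSH handshake in the log above; hypothetical helper.
		func dial() error { return errors.New("ssh: handshake failed: connection reset by peer") }

		func main() {
			// Retry with a randomized delay, the pattern the two retry.go lines record.
			var err error
			for attempt := 0; attempt < 3; attempt++ {
				if err = dial(); err == nil {
					return
				}
				delay := time.Duration(rand.Int63n(int64(500 * time.Millisecond)))
				fmt.Printf("will retry after %v: %v\n", delay, err)
				time.Sleep(delay)
			}
		}
	)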
	I1025 08:56:12.543866  108440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:56:12.543951  108440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 08:56:12.577014  108440 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 08:56:12.577037  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 08:56:12.583872  108440 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 08:56:12.583897  108440 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 08:56:12.609030  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 08:56:12.640896  108440 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 08:56:12.640937  108440 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 08:56:12.661223  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 08:56:12.673413  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 08:56:12.683456  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 08:56:12.684272  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 08:56:12.695505  108440 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 08:56:12.695531  108440 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 08:56:12.738286  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 08:56:12.759585  108440 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 08:56:12.759619  108440 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 08:56:12.784459  108440 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:56:12.784491  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 08:56:12.824323  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 08:56:12.876036  108440 node_ready.go:35] waiting up to 6m0s for node "addons-887867" to be "Ready" ...
	I1025 08:56:12.902255  108440 node_ready.go:49] node "addons-887867" is "Ready"
	I1025 08:56:12.902305  108440 node_ready.go:38] duration metric: took 26.216366ms for node "addons-887867" to be "Ready" ...
	I1025 08:56:12.902325  108440 api_server.go:52] waiting for apiserver process to appear ...
	I1025 08:56:12.902393  108440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:56:13.104383  108440 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 08:56:13.104410  108440 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 08:56:13.108107  108440 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 08:56:13.108128  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 08:56:13.120442  108440 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 08:56:13.120472  108440 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 08:56:13.129632  108440 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 08:56:13.129659  108440 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 08:56:13.132424  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:56:13.374374  108440 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 08:56:13.374407  108440 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 08:56:13.473603  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 08:56:13.591135  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 08:56:13.613846  108440 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 08:56:13.613877  108440 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 08:56:13.665379  108440 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 08:56:13.665415  108440 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 08:56:13.717787  108440 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 08:56:13.717824  108440 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 08:56:13.814515  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 08:56:13.985976  108440 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 08:56:13.986018  108440 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 08:56:14.325673  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 08:56:14.401716  108440 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 08:56:14.401759  108440 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 08:56:14.458185  108440 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 08:56:14.458218  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 08:56:14.706504  108440 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 08:56:14.706529  108440 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 08:56:14.952420  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 08:56:15.091587  108440 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 08:56:15.091621  108440 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 08:56:15.199331  108440 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:56:15.199353  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 08:56:15.435642  108440 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 08:56:15.435672  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 08:56:15.570839  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:56:15.768217  108440 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 08:56:15.768243  108440 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 08:56:16.291505  108440 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 08:56:16.291548  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 08:56:16.478471  108440 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.934483254s)
	I1025 08:56:16.478510  108440 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
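	(Reading the sed expression back from the completed command above: the replaced Corefile gains a "log" directive ahead of "errors" and this hosts block ahead of the "forward . /etc/resolv.conf" line, which is what makes host.minikube.internal resolvable from inside the cluster:
		hosts {
		   192.168.39.1 host.minikube.internal
		   fallthrough
		}
	)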
	I1025 08:56:16.478478  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.869409092s)
	I1025 08:56:16.914341  108440 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 08:56:16.914372  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 08:56:16.991968  108440 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-887867" context rescaled to 1 replicas
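	(The rescale logged here — the "coredns" deployment in "kube-system" taken to 1 replica — is equivalent to running by hand:
		kubectl --context addons-887867 -n kube-system scale deployment coredns --replicas=1
	)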
	I1025 08:56:17.317419  108440 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 08:56:17.317449  108440 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 08:56:17.579299  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 08:56:19.449015  108440 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 08:56:19.452243  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:19.452761  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:19.452807  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:19.452990  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:19.925754  108440 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 08:56:20.138538  108440 addons.go:238] Setting addon gcp-auth=true in "addons-887867"
	I1025 08:56:20.138607  108440 host.go:66] Checking if "addons-887867" exists ...
	I1025 08:56:20.140926  108440 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 08:56:20.144294  108440 main.go:141] libmachine: domain addons-887867 has defined MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:20.144922  108440 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:42:92", ip: ""} in network mk-addons-887867: {Iface:virbr1 ExpiryTime:2025-10-25 09:55:44 +0000 UTC Type:0 Mac:52:54:00:3c:42:92 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-887867 Clientid:01:52:54:00:3c:42:92}
	I1025 08:56:20.144965  108440 main.go:141] libmachine: domain addons-887867 has defined IP address 192.168.39.204 and MAC address 52:54:00:3c:42:92 in network mk-addons-887867
	I1025 08:56:20.145169  108440 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/addons-887867/id_rsa Username:docker}
	I1025 08:56:21.118503  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.457232277s)
	I1025 08:56:21.118557  108440 addons.go:479] Verifying addon ingress=true in "addons-887867"
	I1025 08:56:21.118582  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.445129118s)
	I1025 08:56:21.118714  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.380384891s)
	I1025 08:56:21.118643  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.435150405s)
	I1025 08:56:21.118660  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.434368794s)
	I1025 08:56:21.118789  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.294436238s)
	I1025 08:56:21.118835  108440 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (8.216415915s)
	I1025 08:56:21.118857  108440 api_server.go:72] duration metric: took 9.154266556s to wait for apiserver process to appear ...
	I1025 08:56:21.118865  108440 api_server.go:88] waiting for apiserver healthz status ...
	I1025 08:56:21.118888  108440 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1025 08:56:21.119007  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.986488127s)
	I1025 08:56:21.119042  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.645407591s)
	W1025 08:56:21.119049  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:56:21.119130  108440 retry.go:31] will retry after 373.119516ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
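	(The failing file is plausibly the ig-crd.yaml copied earlier in this log at only 14 bytes — see the scp line at 08:56:11.980155 — which cannot hold the header kubectl is complaining about. As a sketch of the missing fields only, assuming the file is meant to carry the inspektor-gadget CRD, every manifest needs at least:
		apiVersion: apiextensions.k8s.io/v1
		kind: CustomResourceDefinition
	The retried apply below adds --force, but the same header would still be required for validation to pass.)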
	I1025 08:56:21.119103  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.527934171s)
	I1025 08:56:21.119199  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.304653773s)
	I1025 08:56:21.119225  108440 addons.go:479] Verifying addon registry=true in "addons-887867"
	I1025 08:56:21.119259  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.79354795s)
	I1025 08:56:21.119280  108440 addons.go:479] Verifying addon metrics-server=true in "addons-887867"
	I1025 08:56:21.119320  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.166874398s)
	I1025 08:56:21.120895  108440 out.go:179] * Verifying ingress addon...
	I1025 08:56:21.121672  108440 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-887867 service yakd-dashboard -n yakd-dashboard
	
	I1025 08:56:21.121683  108440 out.go:179] * Verifying registry addon...
	I1025 08:56:21.123121  108440 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 08:56:21.124139  108440 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 08:56:21.145026  108440 api_server.go:279] https://192.168.39.204:8443/healthz returned 200:
	ok
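	(The same probe can be run by hand against the endpoint logged above; /healthz is served to unauthenticated clients by default, so only certificate verification needs skipping. The IP is from this run:
		curl -k https://192.168.39.204:8443/healthz
		ok
	)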
	I1025 08:56:21.155865  108440 api_server.go:141] control plane version: v1.34.1
	I1025 08:56:21.155896  108440 api_server.go:131] duration metric: took 37.024901ms to wait for apiserver health ...
	I1025 08:56:21.155906  108440 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 08:56:21.226058  108440 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 08:56:21.226084  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:21.226137  108440 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 08:56:21.226162  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:21.226491  108440 system_pods.go:59] 15 kube-system pods found
	I1025 08:56:21.226524  108440 system_pods.go:61] "amd-gpu-device-plugin-xthsd" [0b401c3c-d12c-4107-b50b-be92186820c4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 08:56:21.226534  108440 system_pods.go:61] "coredns-66bc5c9577-kfn8k" [dd63b9d5-2d5e-4b66-a068-ad7e90ff40bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:56:21.226547  108440 system_pods.go:61] "coredns-66bc5c9577-sqn2j" [e44fc56e-c094-415f-842c-0264d4cc2754] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:56:21.226560  108440 system_pods.go:61] "etcd-addons-887867" [4a2638a0-d920-407f-82ac-9b36228ca83b] Running
	I1025 08:56:21.226566  108440 system_pods.go:61] "kube-apiserver-addons-887867" [45999a40-70ed-4fb5-8e10-45849dbdf686] Running
	I1025 08:56:21.226572  108440 system_pods.go:61] "kube-controller-manager-addons-887867" [c5694e34-14e0-4084-8f05-23451751d41b] Running
	I1025 08:56:21.226581  108440 system_pods.go:61] "kube-ingress-dns-minikube" [bedd5467-6d60-4f43-b94e-eaa035a33fa6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 08:56:21.226591  108440 system_pods.go:61] "kube-proxy-nknsl" [476a317c-e1e7-41c0-bc57-b8a6de0e4cd5] Running
	I1025 08:56:21.226598  108440 system_pods.go:61] "kube-scheduler-addons-887867" [16386f92-9ea0-4308-b22f-130607a58ca4] Running
	I1025 08:56:21.226609  108440 system_pods.go:61] "metrics-server-85b7d694d7-ghqsd" [518fa040-cf86-462a-b880-49bbd614627a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 08:56:21.226623  108440 system_pods.go:61] "nvidia-device-plugin-daemonset-pmvsc" [1a7a19ae-d10d-485f-a8b7-b25acbe309b2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 08:56:21.226634  108440 system_pods.go:61] "registry-6b586f9694-7dz5f" [1edd293c-e746-4c50-959c-670be14152eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 08:56:21.226647  108440 system_pods.go:61] "registry-creds-764b6fb674-kdk2t" [64035fd9-c7a2-4bc7-9d64-3627d003f85b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:56:21.226656  108440 system_pods.go:61] "registry-proxy-m5q4j" [270a52d1-0da0-45c7-a5df-ca1ec37ad476] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 08:56:21.226667  108440 system_pods.go:61] "storage-provisioner" [46dd9b16-a8d0-487b-85f8-67e66a6f8fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:56:21.226678  108440 system_pods.go:74] duration metric: took 70.764875ms to wait for pod list to return data ...
	I1025 08:56:21.226699  108440 default_sa.go:34] waiting for default service account to be created ...
	W1025 08:56:21.259375  108440 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
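	(This conflict is ordinary optimistic concurrency: the local-path StorageClass changed between minikube's read and write, so the write was rejected against a stale resourceVersion. The change being attempted amounts to flipping the real is-default-class annotation, and can be retried by hand — the patch below is a sketch of the same change, not what minikube executes:
		kubectl --context addons-887867 patch storageclass local-path \
		  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	)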
	I1025 08:56:21.280522  108440 default_sa.go:45] found service account: "default"
	I1025 08:56:21.280551  108440 default_sa.go:55] duration metric: took 53.842524ms for default service account to be created ...
	I1025 08:56:21.280564  108440 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 08:56:21.366985  108440 system_pods.go:86] 15 kube-system pods found
	I1025 08:56:21.367037  108440 system_pods.go:89] "amd-gpu-device-plugin-xthsd" [0b401c3c-d12c-4107-b50b-be92186820c4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 08:56:21.367064  108440 system_pods.go:89] "coredns-66bc5c9577-kfn8k" [dd63b9d5-2d5e-4b66-a068-ad7e90ff40bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:56:21.367088  108440 system_pods.go:89] "coredns-66bc5c9577-sqn2j" [e44fc56e-c094-415f-842c-0264d4cc2754] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:56:21.367096  108440 system_pods.go:89] "etcd-addons-887867" [4a2638a0-d920-407f-82ac-9b36228ca83b] Running
	I1025 08:56:21.367106  108440 system_pods.go:89] "kube-apiserver-addons-887867" [45999a40-70ed-4fb5-8e10-45849dbdf686] Running
	I1025 08:56:21.367112  108440 system_pods.go:89] "kube-controller-manager-addons-887867" [c5694e34-14e0-4084-8f05-23451751d41b] Running
	I1025 08:56:21.367126  108440 system_pods.go:89] "kube-ingress-dns-minikube" [bedd5467-6d60-4f43-b94e-eaa035a33fa6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 08:56:21.367137  108440 system_pods.go:89] "kube-proxy-nknsl" [476a317c-e1e7-41c0-bc57-b8a6de0e4cd5] Running
	I1025 08:56:21.367145  108440 system_pods.go:89] "kube-scheduler-addons-887867" [16386f92-9ea0-4308-b22f-130607a58ca4] Running
	I1025 08:56:21.367156  108440 system_pods.go:89] "metrics-server-85b7d694d7-ghqsd" [518fa040-cf86-462a-b880-49bbd614627a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 08:56:21.367167  108440 system_pods.go:89] "nvidia-device-plugin-daemonset-pmvsc" [1a7a19ae-d10d-485f-a8b7-b25acbe309b2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 08:56:21.367178  108440 system_pods.go:89] "registry-6b586f9694-7dz5f" [1edd293c-e746-4c50-959c-670be14152eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 08:56:21.367188  108440 system_pods.go:89] "registry-creds-764b6fb674-kdk2t" [64035fd9-c7a2-4bc7-9d64-3627d003f85b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:56:21.367196  108440 system_pods.go:89] "registry-proxy-m5q4j" [270a52d1-0da0-45c7-a5df-ca1ec37ad476] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 08:56:21.367206  108440 system_pods.go:89] "storage-provisioner" [46dd9b16-a8d0-487b-85f8-67e66a6f8fb4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:56:21.367216  108440 system_pods.go:126] duration metric: took 86.643963ms to wait for k8s-apps to be running ...
	I1025 08:56:21.367229  108440 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 08:56:21.367285  108440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:56:21.492408  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:56:21.598422  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.027525673s)
	W1025 08:56:21.598485  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 08:56:21.598519  108440 retry.go:31] will retry after 289.960328ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	I1025 08:56:21.697525  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:21.697549  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:21.889266  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:56:22.138495  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:22.139451  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:22.667673  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:22.667671  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:22.803357  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.224007897s)
	I1025 08:56:22.803357  108440 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.662403164s)
	I1025 08:56:22.803405  108440 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-887867"
	I1025 08:56:22.803438  108440 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.43610777s)
	I1025 08:56:22.803473  108440 system_svc.go:56] duration metric: took 1.436238133s WaitForService to wait for kubelet
	I1025 08:56:22.803583  108440 kubeadm.go:586] duration metric: took 10.838984883s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:56:22.803612  108440 node_conditions.go:102] verifying NodePressure condition ...
	I1025 08:56:22.805931  108440 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:56:22.805925  108440 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 08:56:22.808254  108440 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 08:56:22.808900  108440 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 08:56:22.809848  108440 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 08:56:22.809888  108440 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 08:56:22.828416  108440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 08:56:22.828459  108440 node_conditions.go:123] node cpu capacity is 2
	I1025 08:56:22.828478  108440 node_conditions.go:105] duration metric: took 24.858669ms to run NodePressure ...
	I1025 08:56:22.828494  108440 start.go:241] waiting for startup goroutines ...
	I1025 08:56:22.831014  108440 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 08:56:22.831036  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:23.005682  108440 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 08:56:23.005716  108440 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 08:56:23.105814  108440 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 08:56:23.105845  108440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 08:56:23.143253  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:23.144728  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:23.287335  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 08:56:23.331876  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:23.636475  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:23.636828  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:23.817486  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:24.131284  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:24.131970  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:24.315818  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:24.634040  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:24.634230  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:24.827863  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:24.932957  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.440500106s)
	W1025 08:56:24.933003  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:56:24.933044  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.043747876s)
	I1025 08:56:24.933059  108440 retry.go:31] will retry after 254.111556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 08:56:25.187645  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:56:25.200508  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:25.200712  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:25.325054  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.037660671s)
	I1025 08:56:25.326289  108440 addons.go:479] Verifying addon gcp-auth=true in "addons-887867"
	I1025 08:56:25.328223  108440 out.go:179] * Verifying gcp-auth addon...
	I1025 08:56:25.330654  108440 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 08:56:25.343689  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:25.348727  108440 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 08:56:25.348758  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:25.636886  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:25.637024  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:25.820931  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:25.835252  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:26.129706  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:26.131267  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:26.317986  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:26.335898  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:26.629985  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:26.630058  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:26.812307  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:26.814586  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.626887294s)
	W1025 08:56:26.814630  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:56:26.814655  108440 retry.go:31] will retry after 825.054864ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 08:56:26.833972  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:27.128501  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:27.128728  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:27.314104  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:27.334101  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:27.630377  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:27.631816  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:27.640785  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:56:27.813268  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:27.836272  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:28.129513  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:28.131318  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:28.314993  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:28.335393  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:28.630713  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:28.634442  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:28.815958  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:28.835153  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:28.931831  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.291001744s)
	W1025 08:56:28.931903  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:56:28.931926  108440 retry.go:31] will retry after 622.724456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 08:56:29.130163  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:29.131371  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:29.314351  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:29.335753  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:29.555042  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:56:29.629505  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:29.633689  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:29.815980  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:29.834831  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:30.131284  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:30.134652  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:30.314837  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:30.336593  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:30.632161  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:30.634094  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:30.666120  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.111026075s)
	W1025 08:56:30.666166  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:56:30.666191  108440 retry.go:31] will retry after 1.304384536s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 08:56:30.813468  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:30.837022  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:31.128284  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:31.128316  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:31.318143  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:31.336182  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:31.630346  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:31.630931  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:31.814202  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:31.835866  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:31.971042  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:56:32.128215  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:32.128966  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:32.315388  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:32.335654  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:32.630012  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:32.630607  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:32.813608  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:32.835236  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:33.089043  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.117959452s)
	W1025 08:56:33.089098  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:56:33.089124  108440 retry.go:31] will retry after 1.863231245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 08:56:33.129930  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:33.226090  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:33.317864  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:33.337093  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:33.626841  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:33.630659  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:33.812959  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:33.837366  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:34.132440  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:34.133586  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:34.314402  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:34.335964  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:34.632340  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:34.633563  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:34.812863  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:34.836810  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:34.952916  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:56:35.140177  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:35.153566  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:35.480716  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:35.481426  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:35.634185  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:35.635924  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:35.815272  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:35.835538  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:36.074783  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.121811633s)
	W1025 08:56:36.074827  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:56:36.074847  108440 retry.go:31] will retry after 3.78526907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 08:56:36.131838  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:36.131963  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:36.313342  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:36.335897  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:36.629588  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:36.630575  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:36.814252  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:36.835533  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:37.128854  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:37.134070  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:37.314903  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:37.335444  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:38.189960  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:38.189993  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:38.195506  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:38.197843  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:38.199038  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:38.201590  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:38.314086  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:38.335166  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:38.628282  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:38.629319  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:38.812696  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:38.838217  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:39.127852  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:39.128009  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:39.312589  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:39.334970  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:39.631791  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:39.631901  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:39.813996  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:39.834836  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:39.861041  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:56:40.128341  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:40.131536  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:40.313658  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:40.334312  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:40.628569  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:40.630669  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:40.816460  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:40.834518  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:56:40.835469  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:56:40.835496  108440 retry.go:31] will retry after 4.011993458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 08:56:41.127709  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:41.130585  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:41.317498  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:41.337005  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:41.752702  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:41.755426  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:41.813900  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:41.833564  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:42.129892  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:42.130972  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:42.312963  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:42.333765  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:42.627305  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:42.627590  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:42.813288  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:42.834533  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:43.127715  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:43.127785  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:43.313761  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:43.334998  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:43.626576  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:43.627135  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:43.813174  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:43.834810  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:44.128709  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:44.129178  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:44.315274  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:44.334647  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:44.628053  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:44.629888  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:44.815745  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:44.835207  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:44.848324  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:56:45.128267  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:45.128703  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:45.314263  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:45.333607  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:45.632815  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:45.633977  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:45.814140  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:45.836601  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:45.961747  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.11337857s)
	W1025 08:56:45.961812  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:56:45.961835  108440 retry.go:31] will retry after 9.27720572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 08:56:46.131454  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:46.131561  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:46.313644  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:46.334175  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:46.630523  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:46.630972  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:46.816037  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:46.834235  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:47.129738  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:47.129806  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:47.314441  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:47.335545  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:47.627042  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:47.630743  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:47.813450  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:47.834745  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:48.128399  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:48.130014  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:48.567069  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:48.568110  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:48.627928  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:48.628554  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:48.814794  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:48.836565  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:49.128556  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:49.129605  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:49.313697  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:49.334448  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:49.626559  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:49.628622  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:49.813351  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:49.836380  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:50.129218  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:50.129351  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:50.313520  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:50.335172  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:50.627482  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:50.630400  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:50.813591  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:50.835347  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:51.128885  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:51.128928  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:51.312190  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:51.334128  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:51.628517  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:51.628943  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:51.813700  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:51.835144  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:52.128079  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:52.128225  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:52.314677  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:52.335605  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:52.629196  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:52.629509  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:52.813624  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:52.913801  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:53.132467  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:53.132530  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:53.314040  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:53.333941  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:53.626407  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:53.629251  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:53.812859  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:53.833891  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:54.283461  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:54.284097  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:54.312805  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:54.335386  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:54.628711  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:54.629237  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:54.812922  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:54.835345  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:55.129004  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:55.129041  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:55.240180  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:56:55.313489  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:55.335323  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:55.629164  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:55.629977  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:55.813286  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:55.837706  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:56.128495  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:56.131945  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:56.313920  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:56.333881  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:56.367946  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.127704937s)
	W1025 08:56:56.367998  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:56:56.368021  108440 retry.go:31] will retry after 9.734487678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:56:56.634269  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:56.732869  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:56.814791  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:56.836492  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:57.130977  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:57.131944  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:57.314223  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:57.335227  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:57.626325  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:57.629288  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:57.814519  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:57.842490  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:58.129332  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:58.129613  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:58.313873  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:58.334275  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:58.626959  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:58.627241  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:58.969001  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:58.976259  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:59.460930  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:59.465473  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:59.466000  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:56:59.466228  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:59.629739  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:56:59.630585  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:56:59.813075  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:56:59.834430  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:00.130172  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:00.130287  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:00.314695  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:00.339960  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:00.631328  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:00.631476  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:00.816135  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:00.836781  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:01.131413  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:01.131440  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:01.313857  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:01.338784  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:01.629424  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:01.629549  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:01.813883  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:01.836109  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:02.127297  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:02.129085  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:02.312970  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:02.334663  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:02.628634  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:02.631893  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:02.814544  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:02.835091  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:03.128588  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:03.128920  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:03.313149  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:03.334850  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:03.627826  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:03.629244  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:03.814644  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:03.835420  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:04.127019  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:04.128302  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:04.313456  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:04.334549  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:04.627205  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:04.627348  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:04.813205  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:04.834195  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:05.133881  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:05.134745  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:05.315361  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:05.336079  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:05.629114  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:05.630284  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:05.815184  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:05.833691  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:06.102923  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:57:06.131168  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:06.135088  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:06.313725  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:06.335132  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:06.632952  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:06.632985  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:06.815200  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:06.837158  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:07.132887  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:07.133235  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:07.181980  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.079002181s)
	W1025 08:57:07.182046  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:57:07.182082  108440 retry.go:31] will retry after 19.396763688s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:57:07.312946  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:07.333753  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:07.631283  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:07.631841  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:07.813224  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:07.834214  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:08.126687  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:08.127221  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:57:08.312863  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:08.335417  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:08.628167  108440 kapi.go:107] duration metric: took 47.504025133s to wait for kubernetes.io/minikube-addons=registry ...
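Note that each of these waits matches pods by label selector, not by name: the registry wait completing here means a pod labeled kubernetes.io/minikube-addons=registry finally left Pending, while the ingress-nginx, csi-hostpath-driver, and gcp-auth selectors keep polling at roughly 500ms intervals. As a hedged sketch of the metadata such a selector matches (the pod name, namespace, and image below are assumptions for illustration, not taken from this log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: registry-xxxxx                       # hypothetical pod name
      namespace: kube-system                     # assumed namespace for the addon
      labels:
        kubernetes.io/minikube-addons: registry  # the label kapi.go:96 polls on
    spec:
      containers:
        - name: registry
          image: registry:2.8                    # hypothetical image tag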
	I1025 08:57:08.628268  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:08.813483  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:08.833920  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:09.128894  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:09.315973  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:09.335375  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:09.627763  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:09.814671  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:09.835073  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:10.126693  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:10.314872  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:10.337297  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:10.629085  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:10.812443  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:10.836991  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:11.167346  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:11.313551  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:11.335787  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:11.628111  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:11.813802  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:11.835943  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:12.126713  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:12.314018  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:12.333880  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:12.629836  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:12.814495  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:12.834521  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:13.128253  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:13.317934  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:13.335358  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:13.629261  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:13.820780  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:13.843218  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:14.128430  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:14.324342  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:14.356363  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:14.628232  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:14.815985  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:14.835363  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:15.128869  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:15.315471  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:15.335500  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:15.627729  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:15.818483  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:15.838615  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:16.129108  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:16.314214  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:16.335537  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:16.666914  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:16.814481  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:16.834672  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:17.128170  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:17.313877  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:17.333809  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:17.627568  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:17.814473  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:17.835337  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:18.127399  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:18.312997  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:18.334077  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:18.626703  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:18.815013  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:18.834756  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:19.129225  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:19.317902  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:19.336538  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:19.632948  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:19.823404  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:19.834328  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:20.132472  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:20.316614  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:20.335295  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:20.627398  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:20.813393  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:20.835898  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:21.128348  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:21.313184  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:21.334562  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:21.628617  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:21.815000  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:21.835424  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:22.127834  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:22.313753  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:22.335440  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:22.627877  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:22.819846  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:22.838291  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:23.127012  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:23.312785  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:23.334567  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:23.627331  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:23.813077  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:23.834263  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:24.126446  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:24.313419  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:24.334297  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:24.627372  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:24.812999  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:24.833829  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:25.127227  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:25.316992  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:25.335192  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:25.627533  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:25.813429  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:25.834320  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:26.126554  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:26.313421  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:26.334784  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:26.579032  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:57:26.650530  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:26.815216  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:26.836026  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:27.129618  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:27.319534  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:27.334814  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:27.613725  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.03465214s)
	W1025 08:57:27.613785  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:57:27.613811  108440 retry.go:31] will retry after 18.826199227s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:57:27.627388  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:27.814632  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:27.835609  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:28.127374  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:28.312725  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:28.335140  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:28.627594  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:28.818374  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:28.835106  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:29.127064  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:29.312545  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:29.334968  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:29.626595  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:29.813127  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:29.834403  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:30.126960  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:30.313013  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:30.333483  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:30.627236  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:30.812764  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:30.834921  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:31.127967  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:31.311957  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:31.334710  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:31.628075  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:31.812908  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:31.834510  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:32.127998  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:32.312226  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:32.334839  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:32.628180  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:32.813051  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:32.835402  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:33.127984  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:33.315027  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:33.335430  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:33.627964  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:33.812340  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:33.834148  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:34.127462  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:34.314043  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:34.335529  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:34.627787  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:34.813504  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:34.835721  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:35.129925  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:35.313355  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:35.335224  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:35.630339  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:35.814628  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:35.836593  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:36.127345  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:36.313299  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:36.336260  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:36.627332  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:36.813195  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:36.834979  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:37.129467  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:37.314170  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:37.336553  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:37.629243  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:37.814616  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:37.834268  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:38.127893  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:38.313413  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:38.334621  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:38.627195  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:38.812591  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:38.834741  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:39.128051  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:39.313002  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:39.334243  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:39.628716  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:39.814747  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:39.836002  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:40.127364  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:40.317117  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:40.336944  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:40.629058  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:40.812310  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:40.835475  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:41.128172  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:41.313132  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:41.334366  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:41.628941  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:41.813616  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:41.835291  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:42.128049  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:42.313134  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:42.334425  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:42.631017  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:42.814101  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:42.835127  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:43.127794  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:43.314065  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:43.333749  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:43.631447  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:43.815017  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:43.833973  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:44.131090  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:44.341399  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:44.343153  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:44.626887  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:44.814064  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:44.836727  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:45.128104  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:45.315993  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:45.339616  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:45.631468  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:45.816285  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:45.914227  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:46.127822  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:46.313610  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:46.335929  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:46.441064  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:57:46.629901  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:46.815249  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:46.836551  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:47.129086  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:47.316573  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:47.336654  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:47.630065  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:47.818129  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:47.836994  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:47.891310  108440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.450177727s)
	W1025 08:57:47.891349  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:57:47.891376  108440 retry.go:31] will retry after 23.410832981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
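The retry.go:31 line above shows minikube scheduling another kubectl apply attempt after a randomized delay. A minimal Go sketch of that retry-until-success pattern follows; the function name retryApply, the linear backoff, and the demo closure are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryApply re-runs apply until it succeeds or attempts are exhausted,
// sleeping between attempts. The delay printed in the log above
// ("will retry after 23.410832981s") suggests a randomized backoff;
// this sketch uses a simple linear one (assumption).
func retryApply(apply func() error, attempts int, base time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		delay := base * time.Duration(i+1) // linear backoff (assumption)
		fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	// Demo closure: fails twice, then succeeds on the third attempt.
	err := retryApply(func() error {
		calls++
		if calls < 3 {
			return errors.New("validation error")
		}
		return nil
	}, 5, 100*time.Millisecond)
	fmt.Println("final:", err)
}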
	I1025 08:57:48.127626  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:48.313085  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:48.337304  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:48.630327  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:48.817347  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:48.850669  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:49.127553  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:49.312535  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:49.336003  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:49.629059  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:49.814332  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:49.834437  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:50.260533  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:50.314902  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:50.339059  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:50.627401  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:50.812761  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:50.835639  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:51.128385  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:51.313662  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:51.335660  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:51.628236  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:52.017833  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:52.018078  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:52.129006  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:52.313347  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:57:52.335084  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:52.629282  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:52.813848  108440 kapi.go:107] duration metric: took 1m30.004939083s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 08:57:52.834048  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:53.127133  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:53.335347  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:53.627230  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:53.835535  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:54.127826  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:54.334206  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:54.627892  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:54.834610  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:55.127572  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:55.334731  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:55.627683  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:55.834089  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:56.126928  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:56.334102  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:56.627281  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:56.834321  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:57.127051  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:57.334498  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:57.627411  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:57.835642  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:58.127442  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:58.334993  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:58.626932  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:58.834044  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:59.127959  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:59.334443  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:57:59.627608  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:57:59.835849  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:00.127886  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:00.334687  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:00.629109  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:00.835434  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:01.127265  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:01.334465  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:01.627941  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:01.834783  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:02.128226  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:02.334333  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:02.627289  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:02.834253  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:03.128582  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:03.335298  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:03.628448  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:03.835237  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:04.126606  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:04.334834  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:04.627651  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:04.835448  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:05.127720  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:05.335702  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:05.628380  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:05.835298  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:06.126849  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:06.334311  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:06.627944  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:06.834296  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:07.126706  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:07.333837  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:07.627850  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:07.834880  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:08.127966  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:08.334604  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:08.628716  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:08.835082  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:09.126472  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:09.335208  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:09.627828  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:09.835966  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:10.126842  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:10.334729  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:10.629661  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:10.834011  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:11.128188  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:11.302380  108440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:58:11.334175  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:11.634013  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:11.838244  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:58:12.065587  108440 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 08:58:12.065710  108440 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
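The repeated validation failure above comes from ig-crd.yaml lacking the apiVersion and kind fields that every Kubernetes manifest must declare; kubectl's client-side validation rejects the file before it reaches the API server. A small Go sketch performing the same header check follows, assuming gopkg.in/yaml.v3 for parsing; for brevity it inspects only the first YAML document in the file (a full checker would keep calling Decode until io.EOF).

package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	// Decode only the first document of a possibly multi-document file.
	dec := yaml.NewDecoder(bytes.NewReader(data))
	var doc map[string]interface{}
	if err := dec.Decode(&doc); err != nil {
		panic(err)
	}
	// Both fields must be present and non-empty strings, mirroring
	// kubectl's "[apiVersion not set, kind not set]" complaint.
	var missing []string
	for _, field := range []string{"apiVersion", "kind"} {
		if v, ok := doc[field].(string); !ok || v == "" {
			missing = append(missing, field+" not set")
		}
	}
	if len(missing) > 0 {
		fmt.Printf("error validating %s: %v\n", os.Args[1], missing)
		os.Exit(1)
	}
	fmt.Println("manifest header OK")
}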
	I1025 08:58:12.127338  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:12.334747  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:12.628367  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:12.834994  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:13.128068  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:13.335052  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:13.626864  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:13.834276  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:14.127422  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:14.335371  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:14.628386  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:14.834226  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:15.126723  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:15.334400  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:15.627320  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:15.837414  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:16.127070  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:16.334536  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:16.628002  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:16.834215  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:17.127253  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:17.334736  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:17.627232  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:17.836747  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:18.128491  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:18.335137  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:18.627154  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:18.834463  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:19.127218  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:19.334441  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:19.627648  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:19.834887  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:20.128223  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:20.334375  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:20.628169  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:20.834620  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:21.128477  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:21.335569  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:21.627618  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:21.833425  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:22.128489  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:22.336729  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:22.627738  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:22.833878  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:23.128220  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:23.335297  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:23.627488  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:23.835120  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:24.127383  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:24.334956  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:24.626505  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:24.834736  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:25.128837  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:25.333831  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:25.628157  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:25.835158  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:26.128487  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:26.333933  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:26.628558  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:26.834957  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:27.126333  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:27.334801  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:27.627461  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:27.835164  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:28.127355  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:28.334862  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:28.627157  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:28.834402  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:29.127499  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:29.334847  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:29.627819  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:29.833986  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:30.128499  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:30.334261  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:30.626865  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:30.834580  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:31.127351  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:31.334898  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:31.627904  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:31.834300  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:32.128452  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:32.335000  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:32.626550  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:32.834961  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:33.128463  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:33.335696  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:33.627786  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:33.834356  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:34.127488  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:34.334638  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:34.627503  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:34.835321  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:35.126954  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:35.334191  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:35.627013  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:35.834210  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:36.128027  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:36.334331  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:36.628027  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:36.834468  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:37.127473  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:37.334648  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:37.627736  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:37.835021  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:38.126681  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:38.335270  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:38.627974  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:38.835081  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:39.127094  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:39.335404  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:39.629254  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:39.835345  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:40.129313  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:40.334833  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:40.630081  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:40.835823  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:41.128450  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:41.337714  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:41.632369  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:41.836980  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:42.127970  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:42.335551  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:42.627871  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:42.835230  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:43.131530  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:43.338669  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:43.628547  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:43.837239  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:44.130279  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:44.337184  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:44.627308  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:44.834711  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:45.129359  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:45.335699  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:45.627358  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:45.836309  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:46.127487  108440 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:58:46.334458  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:46.627869  108440 kapi.go:107] duration metric: took 2m25.504745249s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 08:58:46.834123  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:47.369969  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:47.834912  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:48.335331  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:48.836290  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:49.333929  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:49.836347  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:50.336610  108440 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:58:50.834691  108440 kapi.go:107] duration metric: took 2m25.504031434s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 08:58:50.836484  108440 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-887867 cluster.
	I1025 08:58:50.837810  108440 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 08:58:50.839112  108440 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1025 08:58:50.840420  108440 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1025 08:58:50.841708  108440 addons.go:514] duration metric: took 2m38.877076572s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1025 08:58:50.841756  108440 start.go:246] waiting for cluster config update ...
	I1025 08:58:50.841789  108440 start.go:255] writing updated cluster config ...
	I1025 08:58:50.842112  108440 ssh_runner.go:195] Run: rm -f paused
	I1025 08:58:50.848652  108440 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 08:58:50.852530  108440 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sqn2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:58:50.857994  108440 pod_ready.go:94] pod "coredns-66bc5c9577-sqn2j" is "Ready"
	I1025 08:58:50.858025  108440 pod_ready.go:86] duration metric: took 5.471892ms for pod "coredns-66bc5c9577-sqn2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:58:50.862026  108440 pod_ready.go:83] waiting for pod "etcd-addons-887867" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:58:50.868077  108440 pod_ready.go:94] pod "etcd-addons-887867" is "Ready"
	I1025 08:58:50.868107  108440 pod_ready.go:86] duration metric: took 6.051263ms for pod "etcd-addons-887867" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:58:50.870747  108440 pod_ready.go:83] waiting for pod "kube-apiserver-addons-887867" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:58:50.876827  108440 pod_ready.go:94] pod "kube-apiserver-addons-887867" is "Ready"
	I1025 08:58:50.876858  108440 pod_ready.go:86] duration metric: took 6.069088ms for pod "kube-apiserver-addons-887867" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:58:50.880477  108440 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-887867" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:58:51.254344  108440 pod_ready.go:94] pod "kube-controller-manager-addons-887867" is "Ready"
	I1025 08:58:51.254372  108440 pod_ready.go:86] duration metric: took 373.873031ms for pod "kube-controller-manager-addons-887867" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:58:51.453667  108440 pod_ready.go:83] waiting for pod "kube-proxy-nknsl" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:58:51.852510  108440 pod_ready.go:94] pod "kube-proxy-nknsl" is "Ready"
	I1025 08:58:51.852544  108440 pod_ready.go:86] duration metric: took 398.840622ms for pod "kube-proxy-nknsl" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:58:52.053388  108440 pod_ready.go:83] waiting for pod "kube-scheduler-addons-887867" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:58:52.453430  108440 pod_ready.go:94] pod "kube-scheduler-addons-887867" is "Ready"
	I1025 08:58:52.453464  108440 pod_ready.go:86] duration metric: took 400.048165ms for pod "kube-scheduler-addons-887867" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:58:52.453475  108440 pod_ready.go:40] duration metric: took 1.60479024s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 08:58:52.498833  108440 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 08:58:52.500799  108440 out.go:179] * Done! kubectl is now configured to use "addons-887867" cluster and "default" namespace by default
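The kapi.go:96 and pod_ready.go lines above poll labeled pods on a roughly 500ms cadence until they report Ready or a timeout expires. A minimal client-go sketch of such a wait loop follows; the kubeconfig path, namespace, selector (taken from the log above), 3-minute timeout, and the podReady helper are assumptions for illustration, not minikube's code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selector := "app.kubernetes.io/name=ingress-nginx" // label from the log above
	deadline := time.Now().Add(3 * time.Minute)        // timeout (assumption)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					ready = false
					break
				}
			}
			if ready {
				fmt.Println("all pods Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval, like the cadence above
	}
	fmt.Println("timed out waiting for", selector)
}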
	
	
	==> CRI-O <==
	Oct 25 09:02:00 addons-887867 crio[821]: time="2025-10-25 09:02:00.806446770Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761382920806418584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74d86e14-81f2-4d63-8046-c3d0edd1e26b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:02:00 addons-887867 crio[821]: time="2025-10-25 09:02:00.807208681Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36024f66-fbe8-41a8-a006-02e3a5664b5b name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:02:00 addons-887867 crio[821]: time="2025-10-25 09:02:00.807265344Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36024f66-fbe8-41a8-a006-02e3a5664b5b name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:02:00 addons-887867 crio[821]: time="2025-10-25 09:02:00.807596047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3cf1f93b88af4f4a2710515d0f322e20f4e54648f905734feaa856295e252e70,PodSandboxId:1c9b598baade61abe0da7c565eecab010befb1912664949a0d4d808f315f5e5f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761382778123559205,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e817a8a-8811-45b8-9c82-daa462869b72,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1cd896fc604fffb84f557c8b4f8c42f298d464e4c2447e4997d5b5c8dfb8d58,PodSandboxId:6544b1410ef361ead52421eac8d09adb45e8f073d1de7582a44ea550a9d74c25,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761382736886015834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f578459-4fa3-4bbc-9671-7d3b637a2250,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d62834eee9620340449536362a0032d6546018ea734d2d7c4f9249eb818ae5e,PodSandboxId:fd933b7655d8a9412afdaea424fe1167e605d97ea5461c2b1bf3863ffb306c3e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761382725757955711,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-xzd4f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6e5c1187-02e8-4d6e-bd92-ea7ce13adc00,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0348c6152450c2bd07b70fc0aff16542bfe920a885a6af71c2efd8a098586ba9,PodSandboxId:715dcab956d54c4a110269da953a352fe587062fc49f9df42fb7c2b5501bff78,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1761382683534873148,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jg7f5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f3041ea6-0e0f-4529-8d17-802a3c24b7b7,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f9764a805c5193b1f6c1ff24a8ece83dc361426d4488991c3f1d41b8e41d57,PodSandboxId:c92553e9b4dad157992151df967cb67d84a90e3cc22b3530689c2401dd1b6fe4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761382664148279917,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sf4vz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6c4519c-be7f-4d5e-ae97-00118eb63fb4,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843b5619a6eb5901dc581a693d8ae6c5ffc7afe00e3089abd94d80fafe7b4f52,PodSandboxId:5f8486a194b1fab6a27d9b08bc6ccc7f1aca13970fe352c50d1c57c5c3765179,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761382663891236324,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-vnf5t,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 989130a8-931c-4aed-a69e-d0ab4dac2a74,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6da570c90c31b15648c96b984d012ab7966bb5a8e2ed974ff0c9827da9d2299,PodSandboxId:08240f015d9b63b26a22a5d436e2dfec4e23bc6205080ace8ac209ce8f42131c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761382622776317940,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedd5467-6d60-4f43-b94e-eaa035a33fa6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb5522dbb62d06da8be54fe9182ce2bdd44af1c15da55edf166089243c0f31c,PodSandboxId:509c090c849b2eb13678ce62d1dcfa22f441ae54c2523ca1f4b7e54b62ba428e,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761382583908693508,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xthsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b401c3c-d12c-4107-b50b-be92186820c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d57a37797ad745778974f480eec398972eac03ec86d114fce3c37de9f2d7a19,PodSandboxId:a10f06425a49a136bb95c478193cbf92e4d8c46101136ba3a7a4dd710059c329,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761382581895961529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9b16-a8d0-487b-85f8-67e66a6f8fb4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e4dea32577edec54ae179fff5bf7eb388b29a29c2405f08d1ebe103519fdd89,PodSandboxId:667bd7e1184513e208d19edaed8a590ee94a6198968144116f3a8491ff3ce875,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761382574304691329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqn2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e44fc56e-c094-415f-842c-0264d4cc2754,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3628da16b8a7d26c0955afdf8a62979ed03da0e0975ab0166ed9057ba355c0a,PodSandboxId:1e503acb77c2c6972313aa178f2084f29958dc845c36723bb1490b7e9b9fb0e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761382573423087695,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nknsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476a317c-e1e7-41c0-bc57-b8a6de0e4cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c532802bcdc6cebfc76f09fc5b5399201885357d567b1cfa8c7d896ddde8ee2,PodSandboxId:6323b100895ac20f12390dcdf5ba196fec4b71877a2adb05379f76fdf0208bec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761382561670954234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-887867,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b6d7ac59c910094e64045e75a344ca,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPo
rt\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912d7db702376530334d1bd9e9626c5dfbf10e7476e3ba0b41d7c24139d6264f,PodSandboxId:6a7f1714f1e13add2b3130a829bc7299eaca870bbf0e3390f51c2c504e202c77,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761382561632701134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-887867,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f4c51f5b23a47a24c8e588751244aef,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes
.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae27fc5a97bce0b509b0f7273890651b8a28bc0fbf2b48a9d92cf6d0f1ed562b,PodSandboxId:555728eab3fede19ceea30f56bbff25a861f9bb50dd89ecc26dbdec805e75246,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761382561627042652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-887867,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: fbc5bd133562164472a3678b7e55bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db82a01dddc53de6be4424723a1a183d350df1c85e2fd52d9f60c9686b2c64d,PodSandboxId:547c84b17a5c1bb341f35aa6fcb4907e47ea9400e9986c2ff79af2ddd4dfbdcb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761382561616559649,Labels:map[string]string{io.kubernetes.container.name: kube-apiserv
er,io.kubernetes.pod.name: kube-apiserver-addons-887867,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c4bfcb953fb3970f0280b3792066b34,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36024f66-fbe8-41a8-a006-02e3a5664b5b name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:02:00 addons-887867 crio[821]: time="2025-10-25 09:02:00.945448777Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5b92c83-415f-4e28-97d8-8d6cf9f829be name=/runtime.v1.RuntimeService/Version
	Oct 25 09:02:00 addons-887867 crio[821]: time="2025-10-25 09:02:00.945536878Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5b92c83-415f-4e28-97d8-8d6cf9f829be name=/runtime.v1.RuntimeService/Version
	Oct 25 09:02:00 addons-887867 crio[821]: time="2025-10-25 09:02:00.947015276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94646cbd-e638-4568-b0e7-30e3668c80a8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:02:00 addons-887867 crio[821]: time="2025-10-25 09:02:00.948335756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761382920948307537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94646cbd-e638-4568-b0e7-30e3668c80a8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:02:00 addons-887867 crio[821]: time="2025-10-25 09:02:00.948924717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1efc7986-d01b-4a57-9b2a-f4ff1cb58b13 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:02:00 addons-887867 crio[821]: time="2025-10-25 09:02:00.948995108Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1efc7986-d01b-4a57-9b2a-f4ff1cb58b13 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:02:00 addons-887867 crio[821]: time="2025-10-25 09:02:00.949346865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3cf1f93b88af4f4a2710515d0f322e20f4e54648f905734feaa856295e252e70,PodSandboxId:1c9b598baade61abe0da7c565eecab010befb1912664949a0d4d808f315f5e5f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761382778123559205,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e817a8a-8811-45b8-9c82-daa462869b72,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1cd896fc604fffb84f557c8b4f8c42f298d464e4c2447e4997d5b5c8dfb8d58,PodSandboxId:6544b1410ef361ead52421eac8d09adb45e8f073d1de7582a44ea550a9d74c25,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761382736886015834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f578459-4fa3-4bbc-9671-7d3b637a2250,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d62834eee9620340449536362a0032d6546018ea734d2d7c4f9249eb818ae5e,PodSandboxId:fd933b7655d8a9412afdaea424fe1167e605d97ea5461c2b1bf3863ffb306c3e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761382725757955711,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-xzd4f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6e5c1187-02e8-4d6e-bd92-ea7ce13adc00,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0348c6152450c2bd07b70fc0aff16542bfe920a885a6af71c2efd8a098586ba9,PodSandboxId:715dcab956d54c4a110269da953a352fe587062fc49f9df42fb7c2b5501bff78,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1761382683534873148,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jg7f5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f3041ea6-0e0f-4529-8d17-802a3c24b7b7,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9f9764a805c5193b1f6c1ff24a8ece83dc361426d4488991c3f1d41b8e41d57,PodSandboxId:c92553e9b4dad157992151df967cb67d84a90e3cc22b3530689c2401dd1b6fe4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761382664148279917,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sf4vz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6c4519c-be7f-4d5e-ae97-00118eb63fb4,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843b5619a6eb5901dc581a693d8ae6c5ffc7afe00e3089abd94d80fafe7b4f52,PodSandboxId:5f8486a194b1fab6a27d9b08bc6ccc7f1aca13970fe352c50d1c57c5c3765179,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761382663891236324,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-vnf5t,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 989130a8-931c-4aed-a69e-d0ab4dac2a74,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6da570c90c31b15648c96b984d012ab7966bb5a8e2ed974ff0c9827da9d2299,PodSandboxId:08240f015d9b63b26a22a5d436e2dfec4e23bc6205080ace8ac209ce8f42131c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761382622776317940,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedd5467-6d60-4f43-b94e-eaa035a33fa6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb5522dbb62d06da8be54fe9182ce2bdd44af1c15da55edf166089243c0f31c,PodSandboxId:509c090c849b2eb13678ce62d1dcfa22f441ae54c2523ca1f4b7e54b62ba428e,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761382583908693508,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xthsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b401c3c-d12c-4107-b50b-be92186820c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d57a37797ad745778974f480eec398972eac03ec86d114fce3c37de9f2d7a19,PodSandboxId:a10f06425a49a136bb95c478193cbf92e4d8c46101136ba3a7a4dd710059c329,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761382581895961529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46dd9b16-a8d0-487b-85f8-67e66a6f8fb4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e4dea32577edec54ae179fff5bf7eb388b29a29c2405f08d1ebe103519fdd89,PodSandboxId:667bd7e1184513e208d19edaed8a590ee94a6198968144116f3a8491ff3ce875,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761382574304691329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sqn2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e44fc56e-c094-415f-842c-0264d4cc2754,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3628da16b8a7d26c0955afdf8a62979ed03da0e0975ab0166ed9057ba355c0a,PodSandboxId:1e503acb77c2c6972313aa178f2084f29958dc845c36723bb1490b7e9b9fb0e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761382573423087695,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nknsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476a317c-e1e7-41c0-bc57-b8a6de0e4cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c532802bcdc6cebfc76f09fc5b5399201885357d567b1cfa8c7d896ddde8ee2,PodSandboxId:6323b100895ac20f12390dcdf5ba196fec4b71877a2adb05379f76fdf0208bec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761382561670954234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-887867,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b6d7ac59c910094e64045e75a344ca,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPo
rt\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912d7db702376530334d1bd9e9626c5dfbf10e7476e3ba0b41d7c24139d6264f,PodSandboxId:6a7f1714f1e13add2b3130a829bc7299eaca870bbf0e3390f51c2c504e202c77,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761382561632701134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-887867,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f4c51f5b23a47a24c8e588751244aef,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes
.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae27fc5a97bce0b509b0f7273890651b8a28bc0fbf2b48a9d92cf6d0f1ed562b,PodSandboxId:555728eab3fede19ceea30f56bbff25a861f9bb50dd89ecc26dbdec805e75246,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761382561627042652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-887867,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: fbc5bd133562164472a3678b7e55bb8d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db82a01dddc53de6be4424723a1a183d350df1c85e2fd52d9f60c9686b2c64d,PodSandboxId:547c84b17a5c1bb341f35aa6fcb4907e47ea9400e9986c2ff79af2ddd4dfbdcb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761382561616559649,Labels:map[string]string{io.kubernetes.container.name: kube-apiserv
er,io.kubernetes.pod.name: kube-apiserver-addons-887867,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c4bfcb953fb3970f0280b3792066b34,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1efc7986-d01b-4a57-9b2a-f4ff1cb58b13 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3cf1f93b88af4       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago       Running             nginx                     0                   1c9b598baade6       nginx
	c1cd896fc604f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   6544b1410ef36       busybox
	7d62834eee962       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago       Running             controller                0                   fd933b7655d8a       ingress-nginx-controller-675c5ddd98-xzd4f
	0348c6152450c       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                             3 minutes ago       Exited              patch                     3                   715dcab956d54       ingress-nginx-admission-patch-jg7f5
	a9f9764a805c5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              create                    0                   c92553e9b4dad       ingress-nginx-admission-create-sf4vz
	843b5619a6eb5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   5f8486a194b1f       gadget-vnf5t
	f6da570c90c31       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   08240f015d9b6       kube-ingress-dns-minikube
	aeb5522dbb62d       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   509c090c849b2       amd-gpu-device-plugin-xthsd
	5d57a37797ad7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   a10f06425a49a       storage-provisioner
	0e4dea32577ed       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   667bd7e118451       coredns-66bc5c9577-sqn2j
	e3628da16b8a7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   1e503acb77c2c       kube-proxy-nknsl
	4c532802bcdc6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   6323b100895ac       kube-scheduler-addons-887867
	912d7db702376       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   6a7f1714f1e13       etcd-addons-887867
	ae27fc5a97bce       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   555728eab3fed       kube-controller-manager-addons-887867
	9db82a01dddc5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   547c84b17a5c1       kube-apiserver-addons-887867
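
A note on the table above: the `patch` and `create` containers are the only Exited entries, and both belong to the ingress-nginx admission certgen Jobs, which are expected to run to completion and exit. The ATTEMPT value of 3 for `patch` means that Job retried before finishing; since the controller pod is Running and serving, the webhook certificate was evidently patched in the end. If the Job objects have not been garbage-collected yet, their output can be pulled directly:

    kubectl --context addons-887867 -n ingress-nginx logs job/ingress-nginx-admission-patch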
	
	
	==> coredns [0e4dea32577edec54ae179fff5bf7eb388b29a29c2405f08d1ebe103519fdd89] <==
	[INFO] 10.244.0.8:48782 - 48211 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000087991s
	[INFO] 10.244.0.8:48782 - 51084 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000087282s
	[INFO] 10.244.0.8:48782 - 25323 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000087344s
	[INFO] 10.244.0.8:48782 - 6539 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00006528s
	[INFO] 10.244.0.8:48782 - 25298 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000071638s
	[INFO] 10.244.0.8:48782 - 16320 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000153721s
	[INFO] 10.244.0.8:48782 - 8654 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000066708s
	[INFO] 10.244.0.8:36613 - 6783 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155907s
	[INFO] 10.244.0.8:36613 - 7032 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000180915s
	[INFO] 10.244.0.8:36650 - 33571 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114755s
	[INFO] 10.244.0.8:36650 - 33346 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000169925s
	[INFO] 10.244.0.8:58977 - 9345 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000154525s
	[INFO] 10.244.0.8:58977 - 9146 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000188175s
	[INFO] 10.244.0.8:36005 - 6096 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000120225s
	[INFO] 10.244.0.8:36005 - 6277 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000104144s
	[INFO] 10.244.0.23:60153 - 34200 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000363144s
	[INFO] 10.244.0.23:33560 - 39359 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001504088s
	[INFO] 10.244.0.23:44254 - 48022 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129842s
	[INFO] 10.244.0.23:33666 - 29769 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000273093s
	[INFO] 10.244.0.23:53644 - 51185 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105143s
	[INFO] 10.244.0.23:37230 - 50286 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000241999s
	[INFO] 10.244.0.23:59031 - 13411 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004259462s
	[INFO] 10.244.0.23:34361 - 10397 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.003910488s
	[INFO] 10.244.0.27:58185 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001075309s
	[INFO] 10.244.0.27:45393 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000263572s
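
The NXDOMAIN-then-NOERROR bursts above are normal ndots search-path expansion, not resolution failures: with the default `ndots:5`, a lookup for `registry.kube-system.svc.cluster.local` is first tried with each search domain appended (all NXDOMAIN) before the absolute name answers NOERROR. This can be confirmed from any pod's resolver config, for example the `busybox` pod from this run; the nameserver IP shown below is the conventional kube-dns ClusterIP and is an assumption, not taken from these logs:

    kubectl --context addons-887867 exec busybox -- cat /etc/resolv.conf
    # typically prints something like:
    #   nameserver 10.96.0.10
    #   search default.svc.cluster.local svc.cluster.local cluster.local
    #   options ndots:5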
	
	
	==> describe nodes <==
	Name:               addons-887867
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-887867
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=addons-887867
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T08_56_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-887867
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 08:56:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-887867
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:01:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:00:12 +0000   Sat, 25 Oct 2025 08:56:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:00:12 +0000   Sat, 25 Oct 2025 08:56:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:00:12 +0000   Sat, 25 Oct 2025 08:56:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:00:12 +0000   Sat, 25 Oct 2025 08:56:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.204
	  Hostname:    addons-887867
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 9cdc64782b314b9980de837fd60d2fae
	  System UUID:                9cdc6478-2b31-4b99-80de-837fd60d2fae
	  Boot ID:                    550c36ea-d302-4a7d-a629-18fe69814d4f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-world-app-5d498dc89-xr9sm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-vnf5t                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-xzd4f    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m41s
	  kube-system                 amd-gpu-device-plugin-xthsd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 coredns-66bc5c9577-sqn2j                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m49s
	  kube-system                 etcd-addons-887867                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m54s
	  kube-system                 kube-apiserver-addons-887867                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-controller-manager-addons-887867        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-proxy-nknsl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-scheduler-addons-887867                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m46s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m1s (x8 over 6m1s)  kubelet          Node addons-887867 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s (x8 over 6m1s)  kubelet          Node addons-887867 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x7 over 6m1s)  kubelet          Node addons-887867 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m54s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m54s                kubelet          Node addons-887867 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m54s                kubelet          Node addons-887867 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m54s                kubelet          Node addons-887867 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m53s                kubelet          Node addons-887867 status is now: NodeReady
	  Normal  RegisteredNode           5m50s                node-controller  Node addons-887867 event: Registered Node addons-887867 in Controller
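
The node snapshot itself looks healthy: Ready since 08:56:08, no taints, and only 850m of 2 CPUs requested, so the ingress curl timeout is unlikely to be resource starvation. The `hello-world-app` pod at age 2s also shows this post-mortem was captured immediately after the follow-up deployment started. A quick way to spot anything still stuck at this point would be:

    kubectl --context addons-887867 get pods -A --field-selector=status.phase!=Running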
	
	
	==> dmesg <==
	[  +7.028177] kauditd_printk_skb: 5 callbacks suppressed
	[Oct25 08:57] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.499225] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.522144] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.086361] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.005295] kauditd_printk_skb: 6 callbacks suppressed
	[  +9.382944] kauditd_printk_skb: 51 callbacks suppressed
	[  +3.647440] kauditd_printk_skb: 170 callbacks suppressed
	[Oct25 08:58] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.000021] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.000061] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.635616] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.434276] kauditd_printk_skb: 47 callbacks suppressed
	[Oct25 08:59] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.048315] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.880783] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000074] kauditd_printk_skb: 114 callbacks suppressed
	[  +0.685964] kauditd_printk_skb: 173 callbacks suppressed
	[  +1.183992] kauditd_printk_skb: 165 callbacks suppressed
	[  +4.049743] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.231530] kauditd_printk_skb: 26 callbacks suppressed
	[Oct25 09:00] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000048] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.715370] kauditd_printk_skb: 41 callbacks suppressed
	[Oct25 09:01] kauditd_printk_skb: 127 callbacks suppressed
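
The repeated `kauditd_printk_skb: N callbacks suppressed` lines are the kernel rate-limiting its audit messages; the larger bursts in the 08:59 window coincide with many containers starting and stopping during the parallel tests and are benign. The raw ring buffer can be inspected from inside the VM if needed:

    out/minikube-linux-amd64 -p addons-887867 ssh "sudo dmesg | tail -n 30"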
	
	
	==> etcd [912d7db702376530334d1bd9e9626c5dfbf10e7476e3ba0b41d7c24139d6264f] <==
	{"level":"info","ts":"2025-10-25T08:57:45.620406Z","caller":"traceutil/trace.go:172","msg":"trace[370170024] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"196.231107ms","start":"2025-10-25T08:57:45.424157Z","end":"2025-10-25T08:57:45.620389Z","steps":["trace[370170024] 'process raft request'  (duration: 110.236642ms)","trace[370170024] 'compare'  (duration: 85.738257ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T08:57:50.253614Z","caller":"traceutil/trace.go:172","msg":"trace[1464063206] linearizableReadLoop","detail":"{readStateIndex:1218; appliedIndex:1218; }","duration":"131.591779ms","start":"2025-10-25T08:57:50.122004Z","end":"2025-10-25T08:57:50.253596Z","steps":["trace[1464063206] 'read index received'  (duration: 131.587625ms)","trace[1464063206] 'applied index is now lower than readState.Index'  (duration: 3.329µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T08:57:50.253927Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.898229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T08:57:50.253976Z","caller":"traceutil/trace.go:172","msg":"trace[1368944218] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1183; }","duration":"131.965467ms","start":"2025-10-25T08:57:50.122000Z","end":"2025-10-25T08:57:50.253965Z","steps":["trace[1368944218] 'agreement among raft nodes before linearized reading'  (duration: 131.870764ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:57:50.254191Z","caller":"traceutil/trace.go:172","msg":"trace[1965380645] transaction","detail":"{read_only:false; response_revision:1184; number_of_response:1; }","duration":"192.876537ms","start":"2025-10-25T08:57:50.061304Z","end":"2025-10-25T08:57:50.254180Z","steps":["trace[1965380645] 'process raft request'  (duration: 192.445193ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T08:57:52.009283Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.400491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T08:57:52.009374Z","caller":"traceutil/trace.go:172","msg":"trace[1217081019] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1192; }","duration":"202.484413ms","start":"2025-10-25T08:57:51.806858Z","end":"2025-10-25T08:57:52.009343Z","steps":["trace[1217081019] 'range keys from in-memory index tree'  (duration: 202.271322ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T08:57:52.009611Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"180.24706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T08:57:52.009634Z","caller":"traceutil/trace.go:172","msg":"trace[2036881060] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1192; }","duration":"180.274293ms","start":"2025-10-25T08:57:51.829353Z","end":"2025-10-25T08:57:52.009627Z","steps":["trace[2036881060] 'range keys from in-memory index tree'  (duration: 180.196649ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:59:16.352876Z","caller":"traceutil/trace.go:172","msg":"trace[1011272086] transaction","detail":"{read_only:false; response_revision:1467; number_of_response:1; }","duration":"194.94785ms","start":"2025-10-25T08:59:16.157854Z","end":"2025-10-25T08:59:16.352802Z","steps":["trace[1011272086] 'process raft request'  (duration: 194.53554ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T08:59:17.941314Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.346054ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-85b7d694d7-ghqsd\" limit:1 ","response":"range_response_count:1 size:4659"}
	{"level":"info","ts":"2025-10-25T08:59:17.948319Z","caller":"traceutil/trace.go:172","msg":"trace[1328283640] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-85b7d694d7-ghqsd; range_end:; response_count:1; response_revision:1478; }","duration":"113.393354ms","start":"2025-10-25T08:59:17.834907Z","end":"2025-10-25T08:59:17.948300Z","steps":["trace[1328283640] 'agreement among raft nodes before linearized reading'  (duration: 43.52841ms)","trace[1328283640] 'range keys from in-memory index tree'  (duration: 62.78457ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T08:59:17.943129Z","caller":"traceutil/trace.go:172","msg":"trace[1298468983] transaction","detail":"{read_only:false; response_revision:1479; number_of_response:1; }","duration":"140.797631ms","start":"2025-10-25T08:59:17.802312Z","end":"2025-10-25T08:59:17.943110Z","steps":["trace[1298468983] 'process raft request'  (duration: 76.166293ms)","trace[1298468983] 'compare'  (duration: 62.684079ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T08:59:17.943989Z","caller":"traceutil/trace.go:172","msg":"trace[1129975612] transaction","detail":"{read_only:false; response_revision:1480; number_of_response:1; }","duration":"140.341792ms","start":"2025-10-25T08:59:17.803636Z","end":"2025-10-25T08:59:17.943978Z","steps":["trace[1129975612] 'process raft request'  (duration: 139.835209ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:59:18.132293Z","caller":"traceutil/trace.go:172","msg":"trace[461411583] linearizableReadLoop","detail":"{readStateIndex:1540; appliedIndex:1540; }","duration":"105.830083ms","start":"2025-10-25T08:59:18.026380Z","end":"2025-10-25T08:59:18.132210Z","steps":["trace[461411583] 'read index received'  (duration: 105.823504ms)","trace[461411583] 'applied index is now lower than readState.Index'  (duration: 5.657µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T08:59:18.140344Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.97137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:metrics-server\" limit:1 ","response":"range_response_count:1 size:1042"}
	{"level":"info","ts":"2025-10-25T08:59:18.140402Z","caller":"traceutil/trace.go:172","msg":"trace[686880033] range","detail":"{range_begin:/registry/clusterroles/system:metrics-server; range_end:; response_count:1; response_revision:1484; }","duration":"114.027948ms","start":"2025-10-25T08:59:18.026353Z","end":"2025-10-25T08:59:18.140381Z","steps":["trace[686880033] 'agreement among raft nodes before linearized reading'  (duration: 106.102908ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:59:18.141209Z","caller":"traceutil/trace.go:172","msg":"trace[1288590320] transaction","detail":"{read_only:false; response_revision:1486; number_of_response:1; }","duration":"116.016032ms","start":"2025-10-25T08:59:18.025184Z","end":"2025-10-25T08:59:18.141200Z","steps":["trace[1288590320] 'process raft request'  (duration: 115.9736ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:59:18.142455Z","caller":"traceutil/trace.go:172","msg":"trace[1415302581] transaction","detail":"{read_only:false; response_revision:1485; number_of_response:1; }","duration":"122.749232ms","start":"2025-10-25T08:59:18.019694Z","end":"2025-10-25T08:59:18.142443Z","steps":["trace[1415302581] 'process raft request'  (duration: 112.499991ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T08:59:18.143743Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.789197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T08:59:18.146144Z","caller":"traceutil/trace.go:172","msg":"trace[1777755780] range","detail":"{range_begin:/registry/controllerrevisions; range_end:; response_count:0; response_revision:1486; }","duration":"105.200151ms","start":"2025-10-25T08:59:18.040935Z","end":"2025-10-25T08:59:18.146136Z","steps":["trace[1777755780] 'agreement among raft nodes before linearized reading'  (duration: 102.766814ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:59:33.710948Z","caller":"traceutil/trace.go:172","msg":"trace[1839483559] linearizableReadLoop","detail":"{readStateIndex:1745; appliedIndex:1745; }","duration":"118.891409ms","start":"2025-10-25T08:59:33.592039Z","end":"2025-10-25T08:59:33.710930Z","steps":["trace[1839483559] 'read index received'  (duration: 118.883923ms)","trace[1839483559] 'applied index is now lower than readState.Index'  (duration: 6.209µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T08:59:33.711087Z","caller":"traceutil/trace.go:172","msg":"trace[1140853010] transaction","detail":"{read_only:false; response_revision:1679; number_of_response:1; }","duration":"158.27715ms","start":"2025-10-25T08:59:33.552799Z","end":"2025-10-25T08:59:33.711076Z","steps":["trace[1140853010] 'process raft request'  (duration: 158.187323ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T08:59:33.711218Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.087698ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T08:59:33.711243Z","caller":"traceutil/trace.go:172","msg":"trace[721723794] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1679; }","duration":"119.202571ms","start":"2025-10-25T08:59:33.592033Z","end":"2025-10-25T08:59:33.711236Z","steps":["trace[721723794] 'agreement among raft nodes before linearized reading'  (duration: 119.064697ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:02:01 up 6 min,  0 users,  load average: 0.34, 0.95, 0.58
	Linux addons-887867 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [9db82a01dddc53de6be4424723a1a183d350df1c85e2fd52d9f60c9686b2c64d] <==
	E1025 08:57:13.790112       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.176.185:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.176.185:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.176.185:443: connect: connection refused" logger="UnhandledError"
	I1025 08:57:13.894333       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 08:59:03.325557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.204:8443->192.168.39.1:50290: use of closed network connection
	E1025 08:59:03.525709       1 conn.go:339] Error on socket receive: read tcp 192.168.39.204:8443->192.168.39.1:50328: use of closed network connection
	I1025 08:59:12.737111       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.184.255"}
	I1025 08:59:33.411495       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 08:59:33.764931       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.97.179"}
	E1025 08:59:48.261278       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1025 09:00:00.143343       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1025 09:00:14.806099       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1025 09:00:28.406018       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 09:00:28.406224       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 09:00:28.449167       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 09:00:28.449226       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 09:00:28.460183       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 09:00:28.460277       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 09:00:28.482667       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 09:00:28.483455       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E1025 09:00:28.506076       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	I1025 09:00:28.511272       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 09:00:28.511320       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1025 09:00:29.460255       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1025 09:00:29.511511       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1025 09:00:29.558873       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1025 09:01:59.681514       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.231.115"}
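
From the apiserver's view the test progressed normally: the `default/nginx` ClusterIP was allocated at 08:59:33 and `default/hello-world-app` at 09:01:59. Since the Service side worked, the failed step (curl exiting with status 28, curl's operation-timed-out code) points at the host-port 80 path into the ingress controller rather than at the API. The failing request can be replayed with an explicit timeout, mirroring the test's own command:

    out/minikube-linux-amd64 -p addons-887867 ssh \
      "curl -s -o /dev/null -w '%{http_code}\n' --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"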
	
	
	==> kube-controller-manager [ae27fc5a97bce0b509b0f7273890651b8a28bc0fbf2b48a9d92cf6d0f1ed562b] <==
	E1025 09:00:38.276806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:00:38.839354       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:00:38.840410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1025 09:00:42.733079       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1025 09:00:42.733136       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:00:42.794347       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 09:00:42.794393       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:00:46.628810       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:00:46.630379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:00:48.511320       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:00:48.512507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:00:48.902625       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:00:48.903618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:01:00.660447       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:01:00.661578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:01:08.334422       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:01:08.335912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:01:10.896101       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:01:10.897300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:01:30.300987       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:01:30.302242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:01:41.060823       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:01:41.061857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:01:45.538943       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:01:45.540283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
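
The repeating `*v1.PartialObjectMetadata ... could not find the requested resource` errors line up with the snapshot.storage.k8s.io CRDs being removed at 09:00:28-29 (see the apiserver log above, "Terminating all watchers"): the controller-manager's metadata informers keep retrying LISTs for the deleted VolumeSnapshot types until their discovery information ages out. This is expected noise after an addon teardown, not a controller fault. Confirming the CRDs are gone is straightforward (the grep should match nothing):

    kubectl --context addons-887867 get crd | grep -i snapshot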
	
	
	==> kube-proxy [e3628da16b8a7d26c0955afdf8a62979ed03da0e0975ab0166ed9057ba355c0a] <==
	I1025 08:56:14.189472       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 08:56:14.289673       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 08:56:14.289708       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.204"]
	E1025 08:56:14.289868       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 08:56:14.613344       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1025 08:56:14.619873       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 08:56:14.619908       1 server_linux.go:132] "Using iptables Proxier"
	I1025 08:56:14.755363       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 08:56:14.755615       1 server.go:527] "Version info" version="v1.34.1"
	I1025 08:56:14.755640       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 08:56:14.785819       1 config.go:200] "Starting service config controller"
	I1025 08:56:14.785848       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 08:56:14.785871       1 config.go:106] "Starting endpoint slice config controller"
	I1025 08:56:14.785875       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 08:56:14.785893       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 08:56:14.785897       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 08:56:14.787840       1 config.go:309] "Starting node config controller"
	I1025 08:56:14.787868       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 08:56:14.787876       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 08:56:14.886033       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 08:56:14.886072       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 08:56:14.886371       1 shared_informer.go:356] "Caches are synced" controller="service config"
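
The `No iptables support for family IPv6` block is expected on this guest: the Buildroot kernel has no ip6tables `nat` table loaded, so kube-proxy probes it, logs the failure, and falls back to single-stack IPv4, as the lines immediately after it show. The probe can be reproduced by hand from inside the VM:

    out/minikube-linux-amd64 -p addons-887867 ssh "sudo ip6tables -t nat -L -n"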
	
	
	==> kube-scheduler [4c532802bcdc6cebfc76f09fc5b5399201885357d567b1cfa8c7d896ddde8ee2] <==
	E1025 08:56:04.622230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:56:04.622288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 08:56:04.622338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 08:56:04.622388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 08:56:04.622491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:56:04.622532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:56:04.623064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 08:56:04.623165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 08:56:04.622194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 08:56:04.623270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 08:56:04.623324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 08:56:04.623373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 08:56:05.437717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:56:05.563069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 08:56:05.572224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 08:56:05.588731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 08:56:05.639161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:56:05.656803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 08:56:05.753217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 08:56:05.873585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 08:56:05.905076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:56:05.954565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 08:56:05.982804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 08:56:06.003431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1025 08:56:07.812537       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
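The burst of "Failed to watch" errors above is typical of kube-scheduler startup racing its RBAC bindings; the cache-sync line that follows indicates the informers recovered. If the errors persisted, a quick check of the scheduler's effective permissions would look like this (a diagnostic sketch, not part of the captured log):

	# Ask the API server whether the scheduler identity may list/watch the failing resources.
	kubectl --context addons-887867 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context addons-887867 auth can-i watch storageclasses.storage.k8s.io --as=system:kube-scheduler

Both commands should print "yes" once RBAC has settled.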
	
	
	==> kubelet <==
	Oct 25 09:00:31 addons-887867 kubelet[1505]: I1025 09:00:31.638216    1505 scope.go:117] "RemoveContainer" containerID="f6e2a88e087c43066a91fdafcde5074f120e39a53d81fd0e1ea836090392392e"
	Oct 25 09:00:31 addons-887867 kubelet[1505]: E1025 09:00:31.639529    1505 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e2a88e087c43066a91fdafcde5074f120e39a53d81fd0e1ea836090392392e\": container with ID starting with f6e2a88e087c43066a91fdafcde5074f120e39a53d81fd0e1ea836090392392e not found: ID does not exist" containerID="f6e2a88e087c43066a91fdafcde5074f120e39a53d81fd0e1ea836090392392e"
	Oct 25 09:00:31 addons-887867 kubelet[1505]: I1025 09:00:31.639583    1505 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e2a88e087c43066a91fdafcde5074f120e39a53d81fd0e1ea836090392392e"} err="failed to get container status \"f6e2a88e087c43066a91fdafcde5074f120e39a53d81fd0e1ea836090392392e\": rpc error: code = NotFound desc = could not find container \"f6e2a88e087c43066a91fdafcde5074f120e39a53d81fd0e1ea836090392392e\": container with ID starting with f6e2a88e087c43066a91fdafcde5074f120e39a53d81fd0e1ea836090392392e not found: ID does not exist"
	Oct 25 09:00:37 addons-887867 kubelet[1505]: E1025 09:00:37.762394    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761382837761837536  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:00:37 addons-887867 kubelet[1505]: E1025 09:00:37.762425    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761382837761837536  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:00:47 addons-887867 kubelet[1505]: E1025 09:00:47.765108    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761382847764695854  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:00:47 addons-887867 kubelet[1505]: E1025 09:00:47.765177    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761382847764695854  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:00:57 addons-887867 kubelet[1505]: E1025 09:00:57.768648    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761382857768124868  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:00:57 addons-887867 kubelet[1505]: E1025 09:00:57.768701    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761382857768124868  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:01:07 addons-887867 kubelet[1505]: E1025 09:01:07.771912    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761382867771279781  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:01:07 addons-887867 kubelet[1505]: E1025 09:01:07.771956    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761382867771279781  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:01:17 addons-887867 kubelet[1505]: E1025 09:01:17.774878    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761382877774233972  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:01:17 addons-887867 kubelet[1505]: E1025 09:01:17.775298    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761382877774233972  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:01:23 addons-887867 kubelet[1505]: I1025 09:01:23.518121    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-sqn2j" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:01:25 addons-887867 kubelet[1505]: I1025 09:01:25.518257    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:01:27 addons-887867 kubelet[1505]: E1025 09:01:27.778125    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761382887777712080  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:01:27 addons-887867 kubelet[1505]: E1025 09:01:27.778176    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761382887777712080  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:01:37 addons-887867 kubelet[1505]: E1025 09:01:37.782829    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761382897782188514  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:01:37 addons-887867 kubelet[1505]: E1025 09:01:37.782897    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761382897782188514  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:01:45 addons-887867 kubelet[1505]: I1025 09:01:45.525106    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xthsd" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:01:47 addons-887867 kubelet[1505]: E1025 09:01:47.785901    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761382907785426074  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:01:47 addons-887867 kubelet[1505]: E1025 09:01:47.785941    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761382907785426074  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:01:57 addons-887867 kubelet[1505]: E1025 09:01:57.789605    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761382917788735343  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:01:57 addons-887867 kubelet[1505]: E1025 09:01:57.789894    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761382917788735343  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 25 09:01:59 addons-887867 kubelet[1505]: I1025 09:01:59.673987    1505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gwlm\" (UniqueName: \"kubernetes.io/projected/1bb42631-2640-447b-8f6e-59628e8103cc-kube-api-access-9gwlm\") pod \"hello-world-app-5d498dc89-xr9sm\" (UID: \"1bb42631-2640-447b-8f6e-59628e8103cc\") " pod="default/hello-world-app-5d498dc89-xr9sm"
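The recurring eviction-manager errors above show the kubelet failing to derive dedicated-imagefs stats from cri-o's ImageFsInfo response; they repeat every stats-collection cycle and do not indicate actual evictions. One way to inspect the raw CRI response the kubelet is parsing (a sketch, assuming the profile is still running):

	# Dump cri-o's image-filesystem stats via the CRI, from inside the node.
	out/minikube-linux-amd64 -p addons-887867 ssh -- sudo crictl imagefsinfo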
	
	
	==> storage-provisioner [5d57a37797ad745778974f480eec398972eac03ec86d114fce3c37de9f2d7a19] <==
	W1025 09:01:37.220420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:39.223979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:39.231852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:41.235718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:41.242314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:43.247165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:43.255424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:45.258887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:45.265025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:47.269184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:47.274826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:49.277952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:49.286073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:51.290651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:51.296453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:53.300533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:53.305490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:55.309418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:55.314608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:57.318170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:57.326132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:59.329376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:01:59.335266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:02:01.339284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:02:01.345389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
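The storage-provisioner warnings near the end of the log above are client-go deprecation notices: the provisioner's leader election still reads and writes v1 Endpoints, which is deprecated in favor of discovery.k8s.io/v1 EndpointSlice. They are noise rather than a failure. A hedged way to see the objects involved (a sketch, not part of the test run):

	# The Endpoints the provisioner polls live in kube-system; compare with the EndpointSlice view.
	kubectl --context addons-887867 -n kube-system get endpoints
	kubectl --context addons-887867 -n kube-system get endpointslices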
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-887867 -n addons-887867
helpers_test.go:269: (dbg) Run:  kubectl --context addons-887867 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-xr9sm ingress-nginx-admission-create-sf4vz ingress-nginx-admission-patch-jg7f5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-887867 describe pod hello-world-app-5d498dc89-xr9sm ingress-nginx-admission-create-sf4vz ingress-nginx-admission-patch-jg7f5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-887867 describe pod hello-world-app-5d498dc89-xr9sm ingress-nginx-admission-create-sf4vz ingress-nginx-admission-patch-jg7f5: exit status 1 (73.914967ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-xr9sm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-887867/192.168.39.204
	Start Time:       Sat, 25 Oct 2025 09:01:59 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9gwlm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9gwlm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-xr9sm to addons-887867
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sf4vz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jg7f5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-887867 describe pod hello-world-app-5d498dc89-xr9sm ingress-nginx-admission-create-sf4vz ingress-nginx-admission-patch-jg7f5: exit status 1
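The exit status 1 here comes from the two ingress-nginx admission pods having already been garbage-collected by the time describe ran, which fails the whole multi-pod invocation even though the hello-world-app output was produced. Describing each pod independently keeps whatever output survives; a small sketch:

	# Describe each pod separately so one NotFound does not abort the rest.
	for p in hello-world-app-5d498dc89-xr9sm ingress-nginx-admission-create-sf4vz ingress-nginx-admission-patch-jg7f5; do
	  kubectl --context addons-887867 describe pod "$p" || true
	done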
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-887867 addons disable ingress-dns --alsologtostderr -v=1: (1.100714685s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-887867 addons disable ingress --alsologtostderr -v=1: (7.84995231s)
--- FAIL: TestAddons/parallel/Ingress (158.01s)
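This FAIL traces back to the in-VM request against the ingress controller never completing. Re-running it by hand with verbose output and an explicit cap can show where it stalls (a sketch against the same profile; the 30s cap is an arbitrary choice, not the test's):

	# Repeat the in-VM ingress request the test issues, with a hard timeout.
	out/minikube-linux-amd64 -p addons-887867 ssh -- curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/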

                                                
                                    
TestPreload (137.27s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-367687 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-367687 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m7.986089479s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-367687 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-367687 image pull gcr.io/k8s-minikube/busybox: (3.283889531s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-367687
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-367687: (6.84498957s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-367687 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1025 09:46:37.048523  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-367687 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (56.335706216s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-367687 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-25 09:47:07.260907037 +0000 UTC m=+3140.414023029
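The image list above lacks gcr.io/k8s-minikube/busybox, so the image pulled before the stop did not survive the restart; a plausible cause is the preload tarball being re-extracted over cri-o's image store on the second start, though the log alone does not confirm that. Checking the store directly on the node would distinguish a lost image from a misleading `image list` (a sketch):

	# Inspect cri-o's image store from inside the VM.
	out/minikube-linux-amd64 -p test-preload-367687 ssh -- sudo crictl images | grep busybox || echo "busybox not in crio store"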
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-367687 -n test-preload-367687
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-367687 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-367687 logs -n 25: (1.121922123s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-530815 ssh -n multinode-530815-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ ssh     │ multinode-530815 ssh -n multinode-530815 sudo cat /home/docker/cp-test_multinode-530815-m03_multinode-530815.txt                                          │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ cp      │ multinode-530815 cp multinode-530815-m03:/home/docker/cp-test.txt multinode-530815-m02:/home/docker/cp-test_multinode-530815-m03_multinode-530815-m02.txt │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ ssh     │ multinode-530815 ssh -n multinode-530815-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ ssh     │ multinode-530815 ssh -n multinode-530815-m02 sudo cat /home/docker/cp-test_multinode-530815-m03_multinode-530815-m02.txt                                  │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ node    │ multinode-530815 node stop m03                                                                                                                            │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ node    │ multinode-530815 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ node    │ list -p multinode-530815                                                                                                                                  │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ stop    │ -p multinode-530815                                                                                                                                       │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:37 UTC │
	│ start   │ -p multinode-530815 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:37 UTC │ 25 Oct 25 09:39 UTC │
	│ node    │ list -p multinode-530815                                                                                                                                  │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:39 UTC │                     │
	│ node    │ multinode-530815 node delete m03                                                                                                                          │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:39 UTC │ 25 Oct 25 09:39 UTC │
	│ stop    │ multinode-530815 stop                                                                                                                                     │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:39 UTC │ 25 Oct 25 09:42 UTC │
	│ start   │ -p multinode-530815 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:44 UTC │
	│ node    │ list -p multinode-530815                                                                                                                                  │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ start   │ -p multinode-530815-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-530815-m02 │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ start   │ -p multinode-530815-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-530815-m03 │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │ 25 Oct 25 09:44 UTC │
	│ node    │ add -p multinode-530815                                                                                                                                   │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │                     │
	│ delete  │ -p multinode-530815-m03                                                                                                                                   │ multinode-530815-m03 │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │ 25 Oct 25 09:44 UTC │
	│ delete  │ -p multinode-530815                                                                                                                                       │ multinode-530815     │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │ 25 Oct 25 09:44 UTC │
	│ start   │ -p test-preload-367687 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-367687  │ jenkins │ v1.37.0 │ 25 Oct 25 09:44 UTC │ 25 Oct 25 09:46 UTC │
	│ image   │ test-preload-367687 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-367687  │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ stop    │ -p test-preload-367687                                                                                                                                    │ test-preload-367687  │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -p test-preload-367687 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-367687  │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:47 UTC │
	│ image   │ test-preload-367687 image list                                                                                                                            │ test-preload-367687  │ jenkins │ v1.37.0 │ 25 Oct 25 09:47 UTC │ 25 Oct 25 09:47 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:46:10
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:46:10.774664  130766 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:46:10.774988  130766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:10.774998  130766 out.go:374] Setting ErrFile to fd 2...
	I1025 09:46:10.775002  130766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:10.775182  130766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
	I1025 09:46:10.775603  130766 out.go:368] Setting JSON to false
	I1025 09:46:10.776563  130766 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5312,"bootTime":1761380259,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:46:10.776651  130766 start.go:141] virtualization: kvm guest
	I1025 09:46:10.779045  130766 out.go:179] * [test-preload-367687] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:46:10.780452  130766 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:46:10.780461  130766 notify.go:220] Checking for updates...
	I1025 09:46:10.783026  130766 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:46:10.784281  130766 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-103842/kubeconfig
	I1025 09:46:10.785488  130766 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-103842/.minikube
	I1025 09:46:10.786706  130766 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:46:10.787896  130766 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:46:10.789701  130766 config.go:182] Loaded profile config "test-preload-367687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1025 09:46:10.791570  130766 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1025 09:46:10.792759  130766 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:46:10.828104  130766 out.go:179] * Using the kvm2 driver based on existing profile
	I1025 09:46:10.829235  130766 start.go:305] selected driver: kvm2
	I1025 09:46:10.829255  130766 start.go:925] validating driver "kvm2" against &{Name:test-preload-367687 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-367687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:46:10.829394  130766 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:46:10.830824  130766 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:46:10.830860  130766 cni.go:84] Creating CNI manager for ""
	I1025 09:46:10.830928  130766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 09:46:10.831023  130766 start.go:349] cluster config:
	{Name:test-preload-367687 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-367687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:46:10.831151  130766 iso.go:125] acquiring lock: {Name:mk13c1ce3bc6ed883268d1bbc558e3c5c7b2ab77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:46:10.832812  130766 out.go:179] * Starting "test-preload-367687" primary control-plane node in "test-preload-367687" cluster
	I1025 09:46:10.834161  130766 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1025 09:46:11.750263  130766 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1025 09:46:11.750318  130766 cache.go:58] Caching tarball of preloaded images
	I1025 09:46:11.750505  130766 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1025 09:46:11.752753  130766 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1025 09:46:11.754169  130766 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1025 09:46:11.855720  130766 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1025 09:46:11.855789  130766 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21794-103842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1025 09:46:21.410739  130766 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
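The checksum fetched from the GCS API above is an MD5 that the downloader verifies after the transfer; it can be re-checked by hand against the cached tarball (a sketch using the path and checksum from the log):

	md5sum /home/jenkins/minikube-integration/21794-103842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	# Expect: 2acdb4dde52794f2167c79dcee7507ae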
	I1025 09:46:21.410904  130766 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/config.json ...
	I1025 09:46:21.411158  130766 start.go:360] acquireMachinesLock for test-preload-367687: {Name:mkd4d80b8550b82ada790fb29b73ec76f8d8646f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 09:46:21.411232  130766 start.go:364] duration metric: took 48.818µs to acquireMachinesLock for "test-preload-367687"
	I1025 09:46:21.411250  130766 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:46:21.411255  130766 fix.go:54] fixHost starting: 
	I1025 09:46:21.413283  130766 fix.go:112] recreateIfNeeded on test-preload-367687: state=Stopped err=<nil>
	W1025 09:46:21.413311  130766 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 09:46:21.415116  130766 out.go:252] * Restarting existing kvm2 VM for "test-preload-367687" ...
	I1025 09:46:21.415168  130766 main.go:141] libmachine: starting domain...
	I1025 09:46:21.415178  130766 main.go:141] libmachine: ensuring networks are active...
	I1025 09:46:21.416417  130766 main.go:141] libmachine: Ensuring network default is active
	I1025 09:46:21.416855  130766 main.go:141] libmachine: Ensuring network mk-test-preload-367687 is active
	I1025 09:46:21.417337  130766 main.go:141] libmachine: getting domain XML...
	I1025 09:46:21.418370  130766 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-367687</name>
	  <uuid>1b593800-677e-4e0c-9843-53a8ad83df72</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21794-103842/.minikube/machines/test-preload-367687/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21794-103842/.minikube/machines/test-preload-367687/test-preload-367687.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f4:38:ac'/>
	      <source network='mk-test-preload-367687'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:5e:f5:1e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
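The XML above is the domain definition libmachine hands to libvirt when restarting the VM. The live definition and the DHCP leases backing the "waiting for IP" step below can be inspected on the host with stock virsh (a sketch; the qemu:///system URI is the one from the cluster config):

	virsh --connect qemu:///system dumpxml test-preload-367687
	virsh --connect qemu:///system domifaddr test-preload-367687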
	
	I1025 09:46:22.698584  130766 main.go:141] libmachine: waiting for domain to start...
	I1025 09:46:22.700375  130766 main.go:141] libmachine: domain is now running
	I1025 09:46:22.700409  130766 main.go:141] libmachine: waiting for IP...
	I1025 09:46:22.701214  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:22.701870  130766 main.go:141] libmachine: domain test-preload-367687 has current primary IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:22.701888  130766 main.go:141] libmachine: found domain IP: 192.168.39.196
	I1025 09:46:22.701894  130766 main.go:141] libmachine: reserving static IP address...
	I1025 09:46:22.702354  130766 main.go:141] libmachine: found host DHCP lease matching {name: "test-preload-367687", mac: "52:54:00:f4:38:ac", ip: "192.168.39.196"} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:45:07 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:22.702383  130766 main.go:141] libmachine: skip adding static IP to network mk-test-preload-367687 - found existing host DHCP lease matching {name: "test-preload-367687", mac: "52:54:00:f4:38:ac", ip: "192.168.39.196"}
	I1025 09:46:22.702394  130766 main.go:141] libmachine: reserved static IP address 192.168.39.196 for domain test-preload-367687
	I1025 09:46:22.702400  130766 main.go:141] libmachine: waiting for SSH...
	I1025 09:46:22.702405  130766 main.go:141] libmachine: Getting to WaitForSSH function...
	I1025 09:46:22.704704  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:22.705039  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:45:07 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:22.705062  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:22.705216  130766 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:22.705498  130766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1025 09:46:22.705516  130766 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1025 09:46:25.783010  130766 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I1025 09:46:31.863208  130766 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I1025 09:46:34.989386  130766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:46:34.993506  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:34.994050  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:34.994086  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:34.994400  130766 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/config.json ...
	I1025 09:46:34.994621  130766 machine.go:93] provisionDockerMachine start ...
	I1025 09:46:34.997420  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:34.997929  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:34.997958  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:34.998157  130766 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:34.998417  130766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1025 09:46:34.998431  130766 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:46:35.113290  130766 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 09:46:35.113329  130766 buildroot.go:166] provisioning hostname "test-preload-367687"
	I1025 09:46:35.116077  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:35.116464  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:35.116490  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:35.116726  130766 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:35.116962  130766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1025 09:46:35.116980  130766 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-367687 && echo "test-preload-367687" | sudo tee /etc/hostname
	I1025 09:46:35.248073  130766 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-367687
	
	I1025 09:46:35.251387  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:35.251835  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:35.251862  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:35.252079  130766 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:35.252312  130766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1025 09:46:35.252330  130766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-367687' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-367687/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-367687' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:46:35.375027  130766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:46:35.375057  130766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21794-103842/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-103842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-103842/.minikube}
	I1025 09:46:35.375077  130766 buildroot.go:174] setting up certificates
	I1025 09:46:35.375088  130766 provision.go:84] configureAuth start
	I1025 09:46:35.378103  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:35.378587  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:35.378614  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:35.381414  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:35.381862  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:35.381903  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:35.382069  130766 provision.go:143] copyHostCerts
	I1025 09:46:35.382147  130766 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-103842/.minikube/ca.pem, removing ...
	I1025 09:46:35.382168  130766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-103842/.minikube/ca.pem
	I1025 09:46:35.382259  130766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-103842/.minikube/ca.pem (1082 bytes)
	I1025 09:46:35.382363  130766 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-103842/.minikube/cert.pem, removing ...
	I1025 09:46:35.382372  130766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-103842/.minikube/cert.pem
	I1025 09:46:35.382400  130766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-103842/.minikube/cert.pem (1123 bytes)
	I1025 09:46:35.382456  130766 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-103842/.minikube/key.pem, removing ...
	I1025 09:46:35.382463  130766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-103842/.minikube/key.pem
	I1025 09:46:35.382486  130766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-103842/.minikube/key.pem (1675 bytes)
	I1025 09:46:35.382537  130766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-103842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca-key.pem org=jenkins.test-preload-367687 san=[127.0.0.1 192.168.39.196 localhost minikube test-preload-367687]
	I1025 09:46:35.613640  130766 provision.go:177] copyRemoteCerts
	I1025 09:46:35.613704  130766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:46:35.616837  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:35.617338  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:35.617365  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:35.617521  130766 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/test-preload-367687/id_rsa Username:docker}
	I1025 09:46:35.705611  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:46:35.736547  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 09:46:35.765433  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:46:35.796419  130766 provision.go:87] duration metric: took 421.313728ms to configureAuth
	I1025 09:46:35.796450  130766 buildroot.go:189] setting minikube options for container-runtime
	I1025 09:46:35.796621  130766 config.go:182] Loaded profile config "test-preload-367687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1025 09:46:35.799888  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:35.800379  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:35.800407  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:35.800690  130766 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:35.800960  130766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1025 09:46:35.800979  130766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:46:36.083747  130766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:46:36.083792  130766 machine.go:96] duration metric: took 1.089157644s to provisionDockerMachine
	I1025 09:46:36.083803  130766 start.go:293] postStartSetup for "test-preload-367687" (driver="kvm2")
	I1025 09:46:36.083814  130766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:46:36.083873  130766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:46:36.086501  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:36.086966  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:36.086993  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:36.087185  130766 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/test-preload-367687/id_rsa Username:docker}
	I1025 09:46:36.178715  130766 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:46:36.183262  130766 info.go:137] Remote host: Buildroot 2025.02
	I1025 09:46:36.183295  130766 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-103842/.minikube/addons for local assets ...
	I1025 09:46:36.183371  130766 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-103842/.minikube/files for local assets ...
	I1025 09:46:36.183470  130766 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-103842/.minikube/files/etc/ssl/certs/1077662.pem -> 1077662.pem in /etc/ssl/certs
	I1025 09:46:36.183566  130766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:46:36.195126  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/files/etc/ssl/certs/1077662.pem --> /etc/ssl/certs/1077662.pem (1708 bytes)
	I1025 09:46:36.223882  130766 start.go:296] duration metric: took 140.061929ms for postStartSetup
	I1025 09:46:36.223924  130766 fix.go:56] duration metric: took 14.812669382s for fixHost
	I1025 09:46:36.227189  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:36.227763  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:36.227801  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:36.228016  130766 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:36.228253  130766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1025 09:46:36.228269  130766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 09:46:36.354473  130766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761385596.315156974
	
	I1025 09:46:36.354498  130766 fix.go:216] guest clock: 1761385596.315156974
	I1025 09:46:36.354506  130766 fix.go:229] Guest: 2025-10-25 09:46:36.315156974 +0000 UTC Remote: 2025-10-25 09:46:36.223928605 +0000 UTC m=+25.498549657 (delta=91.228369ms)
	I1025 09:46:36.354522  130766 fix.go:200] guest clock delta is within tolerance: 91.228369ms
	I1025 09:46:36.354527  130766 start.go:83] releasing machines lock for "test-preload-367687", held for 14.9432843s
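
The guest-clock probe above boils down to: run date +%s.%N over SSH, parse the result, and compare it against the host-side timestamp captured when the command returned. A small sketch reproducing the delta computation from the two timestamps in the log (the 1s tolerance is an assumption; minikube's exact threshold lives in fix.go):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Guest clock, parsed from the `date +%s.%N` output above.
    	guest := time.Unix(1761385596, 315156974)
    	// Host clock captured just after the SSH command returned.
    	remote := time.Date(2025, 10, 25, 9, 46, 36, 223928605, time.UTC)
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // assumed threshold, not minikube's exact value
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // 91.228369ms here
    	} else {
    		fmt.Printf("guest clock delta %v exceeds %v; a resync would be needed\n", delta, tolerance)
    	}
    }
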
	I1025 09:46:36.357459  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:36.357962  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:36.357991  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:36.358590  130766 ssh_runner.go:195] Run: cat /version.json
	I1025 09:46:36.358716  130766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:46:36.361900  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:36.362238  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:36.362262  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:36.362299  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:36.362487  130766 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/test-preload-367687/id_rsa Username:docker}
	I1025 09:46:36.362868  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:36.362902  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:36.363093  130766 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/test-preload-367687/id_rsa Username:docker}
	I1025 09:46:36.444339  130766 ssh_runner.go:195] Run: systemctl --version
	I1025 09:46:36.480417  130766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:46:36.624832  130766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:46:36.632008  130766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:46:36.632087  130766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:46:36.653114  130766 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:46:36.653140  130766 start.go:495] detecting cgroup driver to use...
	I1025 09:46:36.653219  130766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:46:36.674176  130766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:46:36.691896  130766 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:46:36.691974  130766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:46:36.709846  130766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:46:36.726604  130766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:46:36.878924  130766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:46:37.098970  130766 docker.go:234] disabling docker service ...
	I1025 09:46:37.099075  130766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:46:37.116900  130766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:46:37.132128  130766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:46:37.286800  130766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:46:37.435321  130766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:46:37.451552  130766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:46:37.474491  130766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1025 09:46:37.474564  130766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:37.486933  130766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:46:37.487020  130766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:37.499659  130766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:37.512555  130766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:37.526196  130766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:46:37.539816  130766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:37.552444  130766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:37.573534  130766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
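
The net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, reconstructed for readability (only the keys these commands touch are shown; surrounding sections of the drop-in are omitted):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
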
	I1025 09:46:37.586338  130766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:46:37.597513  130766 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 09:46:37.597592  130766 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 09:46:37.618067  130766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:46:37.630140  130766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:37.783046  130766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:46:37.895704  130766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:46:37.895830  130766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:46:37.901210  130766 start.go:563] Will wait 60s for crictl version
	I1025 09:46:37.901286  130766 ssh_runner.go:195] Run: which crictl
	I1025 09:46:37.905532  130766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 09:46:37.950627  130766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 09:46:37.950735  130766 ssh_runner.go:195] Run: crio --version
	I1025 09:46:37.980939  130766 ssh_runner.go:195] Run: crio --version
	I1025 09:46:38.013203  130766 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1025 09:46:38.017287  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:38.017849  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:38.017879  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:38.018075  130766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1025 09:46:38.023547  130766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
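
The /etc/hosts one-liner above is an idempotent rewrite: filter out any existing host.minikube.internal line, append the fresh mapping, and sudo cp the temp file back into place (cp rather than mv, so the original file's inode and attributes are preserved). The same logic as a short Go sketch (direct writing in place of the temp-file hop; the 0644 mode is an assumption):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const name = "host.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale entry for the name, like grep -v $'\t<name>$' above.
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, "192.168.39.1\t"+name)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    }
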
	I1025 09:46:38.040646  130766 kubeadm.go:883] updating cluster {Name:test-preload-367687 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-367687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:46:38.040786  130766 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1025 09:46:38.040831  130766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:46:38.083676  130766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1025 09:46:38.083753  130766 ssh_runner.go:195] Run: which lz4
	I1025 09:46:38.088342  130766 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 09:46:38.092936  130766 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 09:46:38.092972  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1025 09:46:39.558460  130766 crio.go:462] duration metric: took 1.470152248s to copy over tarball
	I1025 09:46:39.558540  130766 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 09:46:41.406026  130766 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.847398995s)
	I1025 09:46:41.406070  130766 crio.go:469] duration metric: took 1.84757802s to extract the tarball
	I1025 09:46:41.406081  130766 ssh_runner.go:146] rm: /preloaded.tar.lz4
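
The preload path just shown is: stat the tarball (absent, so it gets scp'd over, ~398 MB), then extract it with tar -I lz4 into /var, then delete it. A pure-Go sketch of the decompress-and-unpack step, assuming the third-party github.com/pierrec/lz4/v4 reader (xattrs, symlinks and path sanitization are omitted here; the real command also restores security.capability xattrs):

    package main

    import (
    	"archive/tar"
    	"io"
    	"log"
    	"os"
    	"path/filepath"

    	"github.com/pierrec/lz4/v4" // assumed dependency for streaming lz4
    )

    func main() {
    	f, err := os.Open("/preloaded.tar.lz4")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()
    	tr := tar.NewReader(lz4.NewReader(f))
    	for {
    		hdr, err := tr.Next()
    		if err == io.EOF {
    			break
    		}
    		if err != nil {
    			log.Fatal(err)
    		}
    		dst := filepath.Join("/var", hdr.Name) // no ".." sanitization in this sketch
    		switch hdr.Typeflag {
    		case tar.TypeDir:
    			os.MkdirAll(dst, os.FileMode(hdr.Mode))
    		case tar.TypeReg:
    			os.MkdirAll(filepath.Dir(dst), 0o755)
    			out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
    			if err != nil {
    				log.Fatal(err)
    			}
    			io.Copy(out, tr)
    			out.Close()
    		}
    	}
    }
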
	I1025 09:46:41.446593  130766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:46:41.491308  130766 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:46:41.491339  130766 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:46:41.491348  130766 kubeadm.go:934] updating node { 192.168.39.196 8443 v1.32.0 crio true true} ...
	I1025 09:46:41.491493  130766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-367687 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-367687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:46:41.491559  130766 ssh_runner.go:195] Run: crio config
	I1025 09:46:41.540347  130766 cni.go:84] Creating CNI manager for ""
	I1025 09:46:41.540369  130766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 09:46:41.540388  130766 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:46:41.540414  130766 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-367687 NodeName:test-preload-367687 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:46:41.540571  130766 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-367687"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.196"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:46:41.540655  130766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1025 09:46:41.553217  130766 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:46:41.553294  130766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:46:41.565462  130766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1025 09:46:41.585294  130766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:46:41.605519  130766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1025 09:46:41.625942  130766 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I1025 09:46:41.630291  130766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:46:41.645515  130766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:41.794940  130766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:46:41.825288  130766 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687 for IP: 192.168.39.196
	I1025 09:46:41.825311  130766 certs.go:195] generating shared ca certs ...
	I1025 09:46:41.825329  130766 certs.go:227] acquiring lock for ca certs: {Name:mk3c196d72f190531a27a5874f74b0341375ed0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:41.825518  130766 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-103842/.minikube/ca.key
	I1025 09:46:41.825567  130766 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-103842/.minikube/proxy-client-ca.key
	I1025 09:46:41.825574  130766 certs.go:257] generating profile certs ...
	I1025 09:46:41.825658  130766 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/client.key
	I1025 09:46:41.825738  130766 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/apiserver.key.a3658d4b
	I1025 09:46:41.825854  130766 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/proxy-client.key
	I1025 09:46:41.825986  130766 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/107766.pem (1338 bytes)
	W1025 09:46:41.826023  130766 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-103842/.minikube/certs/107766_empty.pem, impossibly tiny 0 bytes
	I1025 09:46:41.826032  130766 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:46:41.826056  130766 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:46:41.826081  130766 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:46:41.826169  130766 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-103842/.minikube/certs/key.pem (1675 bytes)
	I1025 09:46:41.826226  130766 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-103842/.minikube/files/etc/ssl/certs/1077662.pem (1708 bytes)
	I1025 09:46:41.826807  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:46:41.866619  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 09:46:41.902753  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:46:41.932231  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:46:41.961185  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 09:46:41.990595  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:46:42.019845  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:46:42.048838  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:46:42.077725  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/certs/107766.pem --> /usr/share/ca-certificates/107766.pem (1338 bytes)
	I1025 09:46:42.106385  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/files/etc/ssl/certs/1077662.pem --> /usr/share/ca-certificates/1077662.pem (1708 bytes)
	I1025 09:46:42.135380  130766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-103842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:46:42.164174  130766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:46:42.185086  130766 ssh_runner.go:195] Run: openssl version
	I1025 09:46:42.191623  130766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1077662.pem && ln -fs /usr/share/ca-certificates/1077662.pem /etc/ssl/certs/1077662.pem"
	I1025 09:46:42.205188  130766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1077662.pem
	I1025 09:46:42.210977  130766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:04 /usr/share/ca-certificates/1077662.pem
	I1025 09:46:42.211049  130766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1077662.pem
	I1025 09:46:42.218547  130766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1077662.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:46:42.232176  130766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:46:42.245840  130766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:42.251164  130766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:42.251226  130766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:42.258584  130766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:46:42.271415  130766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107766.pem && ln -fs /usr/share/ca-certificates/107766.pem /etc/ssl/certs/107766.pem"
	I1025 09:46:42.285001  130766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107766.pem
	I1025 09:46:42.290029  130766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:04 /usr/share/ca-certificates/107766.pem
	I1025 09:46:42.290092  130766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107766.pem
	I1025 09:46:42.297367  130766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107766.pem /etc/ssl/certs/51391683.0"
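
The hash-and-symlink sequence above is the standard OpenSSL trust-store layout: OpenSSL looks up CAs in /etc/ssl/certs by subject-hash filenames, so each certificate gets a "<hash>.0" symlink (b5213941.0 for minikubeCA in the log). A sketch of one iteration, shelling out to openssl the same way:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", matching the log
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // `ln -fs` semantics: replace any stale link
    	if err := os.Symlink(cert, link); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("linked", link, "->", cert)
    }
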
	I1025 09:46:42.310647  130766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:46:42.315997  130766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:46:42.323714  130766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:46:42.331272  130766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:46:42.339211  130766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:46:42.347689  130766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:46:42.355481  130766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
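
Each of the `openssl x509 -checkend 86400` probes above asks one question: does this certificate expire within the next 24 hours? The equivalent check in Go's standard library, for one of the certs from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate expires within 86400s; would regenerate")
    	} else {
    		fmt.Println("certificate valid for at least another day")
    	}
    }
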
	I1025 09:46:42.363331  130766 kubeadm.go:400] StartCluster: {Name:test-preload-367687 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-367687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:46:42.363434  130766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:46:42.363496  130766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:46:42.407103  130766 cri.go:89] found id: ""
	I1025 09:46:42.407176  130766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:46:42.420033  130766 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:46:42.420051  130766 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:46:42.420093  130766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:46:42.432326  130766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:46:42.432886  130766 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-367687" does not appear in /home/jenkins/minikube-integration/21794-103842/kubeconfig
	I1025 09:46:42.432997  130766 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-103842/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-367687" cluster setting kubeconfig missing "test-preload-367687" context setting]
	I1025 09:46:42.433269  130766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-103842/kubeconfig: {Name:mk3d3f05e9f06ad659cee3399b3108e510d71411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:42.433764  130766 kapi.go:59] client config for test-preload-367687: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/client.key", CAFile:"/home/jenkins/minikube-integration/21794-103842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:46:42.434207  130766 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 09:46:42.434228  130766 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 09:46:42.434236  130766 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 09:46:42.434242  130766 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 09:46:42.434249  130766 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 09:46:42.434605  130766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:46:42.446162  130766 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.196
	I1025 09:46:42.446203  130766 kubeadm.go:1160] stopping kube-system containers ...
	I1025 09:46:42.446216  130766 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1025 09:46:42.446264  130766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:46:42.482805  130766 cri.go:89] found id: ""
	I1025 09:46:42.482883  130766 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 09:46:42.503233  130766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:46:42.515892  130766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:46:42.515912  130766 kubeadm.go:157] found existing configuration files:
	
	I1025 09:46:42.515974  130766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:46:42.526581  130766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:46:42.526664  130766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:46:42.538364  130766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:46:42.549201  130766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:46:42.549277  130766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:46:42.561277  130766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:46:42.572236  130766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:46:42.572302  130766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:46:42.584079  130766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:46:42.595220  130766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:46:42.595294  130766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:46:42.607531  130766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:46:42.619942  130766 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:46:42.681076  130766 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:46:43.614973  130766 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:46:43.864696  130766 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:46:43.936926  130766 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:46:44.029655  130766 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:46:44.029755  130766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:46:44.530517  130766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:46:45.030718  130766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:46:45.530282  130766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:46:46.030128  130766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:46:46.529881  130766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:46:46.564887  130766 api_server.go:72] duration metric: took 2.535250108s to wait for apiserver process to appear ...
	I1025 09:46:46.564918  130766 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:46:46.564941  130766 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I1025 09:46:49.481491  130766 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 09:46:49.481531  130766 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 09:46:49.481552  130766 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I1025 09:46:49.494353  130766 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 09:46:49.494387  130766 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 09:46:49.565749  130766 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I1025 09:46:49.588456  130766 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:46:49.588488  130766 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:46:50.065162  130766 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I1025 09:46:50.070497  130766 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:46:50.070534  130766 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:46:50.565215  130766 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I1025 09:46:50.572350  130766 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:46:50.572387  130766 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:46:51.065309  130766 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I1025 09:46:51.070096  130766 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I1025 09:46:51.076637  130766 api_server.go:141] control plane version: v1.32.0
	I1025 09:46:51.076664  130766 api_server.go:131] duration metric: took 4.511738501s to wait for apiserver health ...
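
The healthz wait that just completed follows a simple pattern visible in the transcript: anonymous requests first get 403 until RBAC bootstrap roles exist, then 500 while the remaining poststarthooks finish, then a plain 200 "ok". A sketch of such a poll loop (TLS verification skipped here for brevity, and the 2-minute budget is an assumption; minikube itself trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute) // assumed wait budget
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.39.196:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz: %s\n", body) // "ok" once all poststarthooks pass
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver never became healthy")
    }
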
	I1025 09:46:51.076674  130766 cni.go:84] Creating CNI manager for ""
	I1025 09:46:51.076681  130766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 09:46:51.078473  130766 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 09:46:51.080117  130766 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 09:46:51.093254  130766 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1025 09:46:51.136507  130766 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:46:51.143289  130766 system_pods.go:59] 7 kube-system pods found
	I1025 09:46:51.143336  130766 system_pods.go:61] "coredns-668d6bf9bc-klxnk" [9667a197-1299-4af3-9a5f-8249a20e725e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:46:51.143346  130766 system_pods.go:61] "etcd-test-preload-367687" [7385d561-0449-4e46-af61-d6c3beb1dfe9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:46:51.143354  130766 system_pods.go:61] "kube-apiserver-test-preload-367687" [661ef7a1-2c02-4a12-b434-cf4befc186e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:46:51.143360  130766 system_pods.go:61] "kube-controller-manager-test-preload-367687" [492c8cc0-7904-4f01-a164-334e05b5e591] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:46:51.143366  130766 system_pods.go:61] "kube-proxy-86gq9" [261e46e0-22a9-4f7a-bada-adbe7abfa1ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:46:51.143373  130766 system_pods.go:61] "kube-scheduler-test-preload-367687" [b315555f-0d41-4de3-bc2f-7eba2eb86f4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:46:51.143380  130766 system_pods.go:61] "storage-provisioner" [47f478ca-4842-463d-bc6a-7f6ac88c24d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:46:51.143388  130766 system_pods.go:74] duration metric: took 6.85713ms to wait for pod list to return data ...
	I1025 09:46:51.143405  130766 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:46:51.155572  130766 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 09:46:51.155618  130766 node_conditions.go:123] node cpu capacity is 2
	I1025 09:46:51.155638  130766 node_conditions.go:105] duration metric: took 12.221889ms to run NodePressure ...
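The NodePressure check above reads capacity straight from the node object (17734596Ki ephemeral storage, 2 CPUs). The same figures can be pulled with kubectl once the kubeconfig below is written; a sketch:

    kubectl --kubeconfig /home/jenkins/minikube-integration/21794-103842/kubeconfig \
      get node test-preload-367687 \
      -o jsonpath="{.status.capacity['ephemeral-storage']} {.status.capacity.cpu}"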
	I1025 09:46:51.155725  130766 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:46:51.444564  130766 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1025 09:46:51.448549  130766 kubeadm.go:743] kubelet initialised
	I1025 09:46:51.448571  130766 kubeadm.go:744] duration metric: took 3.977143ms waiting for restarted kubelet to initialise ...
	I1025 09:46:51.448589  130766 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:46:51.464391  130766 ops.go:34] apiserver oom_adj: -16
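An apiserver oom_adj of -16 (legacy scale, -17 to +15) tells the kernel's OOM killer to strongly prefer other victims under memory pressure. The modern per-process knob can be read the same way; a sketch using a pgrep pattern similar to the log's:

    # read the current-scale OOM score adjustment for the newest exact-name match
    cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj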
	I1025 09:46:51.464418  130766 kubeadm.go:601] duration metric: took 9.044361219s to restartPrimaryControlPlane
	I1025 09:46:51.464428  130766 kubeadm.go:402] duration metric: took 9.101108486s to StartCluster
	I1025 09:46:51.464446  130766 settings.go:142] acquiring lock: {Name:mk3fbb1aeefa7e4423e1917520f38525e6bd947f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:51.464537  130766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-103842/kubeconfig
	I1025 09:46:51.465153  130766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-103842/kubeconfig: {Name:mk3d3f05e9f06ad659cee3399b3108e510d71411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:51.465431  130766 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:46:51.465611  130766 config.go:182] Loaded profile config "test-preload-367687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1025 09:46:51.465572  130766 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:46:51.465678  130766 addons.go:69] Setting storage-provisioner=true in profile "test-preload-367687"
	I1025 09:46:51.465686  130766 addons.go:69] Setting default-storageclass=true in profile "test-preload-367687"
	I1025 09:46:51.465703  130766 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-367687"
	I1025 09:46:51.465713  130766 addons.go:238] Setting addon storage-provisioner=true in "test-preload-367687"
	W1025 09:46:51.465731  130766 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:46:51.465779  130766 host.go:66] Checking if "test-preload-367687" exists ...
	I1025 09:46:51.467356  130766 out.go:179] * Verifying Kubernetes components...
	I1025 09:46:51.468149  130766 kapi.go:59] client config for test-preload-367687: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/client.key", CAFile:"/home/jenkins/minikube-integration/21794-103842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:46:51.468385  130766 addons.go:238] Setting addon default-storageclass=true in "test-preload-367687"
	W1025 09:46:51.468399  130766 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:46:51.468416  130766 host.go:66] Checking if "test-preload-367687" exists ...
	I1025 09:46:51.469109  130766 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:46:51.469137  130766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:51.470504  130766 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:46:51.470528  130766 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:46:51.470857  130766 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:46:51.470880  130766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:46:51.473338  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:51.473765  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:51.473806  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:51.473851  130766 main.go:141] libmachine: domain test-preload-367687 has defined MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:51.474009  130766 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/test-preload-367687/id_rsa Username:docker}
	I1025 09:46:51.474358  130766 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:38:ac", ip: ""} in network mk-test-preload-367687: {Iface:virbr1 ExpiryTime:2025-10-25 10:46:32 +0000 UTC Type:0 Mac:52:54:00:f4:38:ac Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:test-preload-367687 Clientid:01:52:54:00:f4:38:ac}
	I1025 09:46:51.474388  130766 main.go:141] libmachine: domain test-preload-367687 has defined IP address 192.168.39.196 and MAC address 52:54:00:f4:38:ac in network mk-test-preload-367687
	I1025 09:46:51.474545  130766 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/test-preload-367687/id_rsa Username:docker}
	I1025 09:46:51.722353  130766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:46:51.743544  130766 node_ready.go:35] waiting up to 6m0s for node "test-preload-367687" to be "Ready" ...
	I1025 09:46:51.746673  130766 node_ready.go:49] node "test-preload-367687" is "Ready"
	I1025 09:46:51.746710  130766 node_ready.go:38] duration metric: took 3.12164ms for node "test-preload-367687" to be "Ready" ...
	I1025 09:46:51.746723  130766 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:46:51.746796  130766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:46:51.768302  130766 api_server.go:72] duration metric: took 302.826548ms to wait for apiserver process to appear ...
	I1025 09:46:51.768333  130766 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:46:51.768354  130766 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I1025 09:46:51.774275  130766 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I1025 09:46:51.775306  130766 api_server.go:141] control plane version: v1.32.0
	I1025 09:46:51.775334  130766 api_server.go:131] duration metric: took 6.992899ms to wait for apiserver health ...
	I1025 09:46:51.775345  130766 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:46:51.779969  130766 system_pods.go:59] 7 kube-system pods found
	I1025 09:46:51.779997  130766 system_pods.go:61] "coredns-668d6bf9bc-klxnk" [9667a197-1299-4af3-9a5f-8249a20e725e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:46:51.780005  130766 system_pods.go:61] "etcd-test-preload-367687" [7385d561-0449-4e46-af61-d6c3beb1dfe9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:46:51.780014  130766 system_pods.go:61] "kube-apiserver-test-preload-367687" [661ef7a1-2c02-4a12-b434-cf4befc186e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:46:51.780020  130766 system_pods.go:61] "kube-controller-manager-test-preload-367687" [492c8cc0-7904-4f01-a164-334e05b5e591] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:46:51.780024  130766 system_pods.go:61] "kube-proxy-86gq9" [261e46e0-22a9-4f7a-bada-adbe7abfa1ca] Running
	I1025 09:46:51.780030  130766 system_pods.go:61] "kube-scheduler-test-preload-367687" [b315555f-0d41-4de3-bc2f-7eba2eb86f4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:46:51.780036  130766 system_pods.go:61] "storage-provisioner" [47f478ca-4842-463d-bc6a-7f6ac88c24d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:46:51.780044  130766 system_pods.go:74] duration metric: took 4.693352ms to wait for pod list to return data ...
	I1025 09:46:51.780053  130766 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:46:51.783017  130766 default_sa.go:45] found service account: "default"
	I1025 09:46:51.783052  130766 default_sa.go:55] duration metric: took 2.991697ms for default service account to be created ...
	I1025 09:46:51.783065  130766 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:46:51.787403  130766 system_pods.go:86] 7 kube-system pods found
	I1025 09:46:51.787431  130766 system_pods.go:89] "coredns-668d6bf9bc-klxnk" [9667a197-1299-4af3-9a5f-8249a20e725e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:46:51.787441  130766 system_pods.go:89] "etcd-test-preload-367687" [7385d561-0449-4e46-af61-d6c3beb1dfe9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:46:51.787448  130766 system_pods.go:89] "kube-apiserver-test-preload-367687" [661ef7a1-2c02-4a12-b434-cf4befc186e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:46:51.787469  130766 system_pods.go:89] "kube-controller-manager-test-preload-367687" [492c8cc0-7904-4f01-a164-334e05b5e591] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:46:51.787475  130766 system_pods.go:89] "kube-proxy-86gq9" [261e46e0-22a9-4f7a-bada-adbe7abfa1ca] Running
	I1025 09:46:51.787482  130766 system_pods.go:89] "kube-scheduler-test-preload-367687" [b315555f-0d41-4de3-bc2f-7eba2eb86f4c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:46:51.787487  130766 system_pods.go:89] "storage-provisioner" [47f478ca-4842-463d-bc6a-7f6ac88c24d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:46:51.787495  130766 system_pods.go:126] duration metric: took 4.424357ms to wait for k8s-apps to be running ...
	I1025 09:46:51.787504  130766 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:46:51.787551  130766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:46:51.807422  130766 system_svc.go:56] duration metric: took 19.907179ms WaitForService to wait for kubelet
	I1025 09:46:51.807479  130766 kubeadm.go:586] duration metric: took 341.990061ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:46:51.807503  130766 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:46:51.810202  130766 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 09:46:51.810230  130766 node_conditions.go:123] node cpu capacity is 2
	I1025 09:46:51.810242  130766 node_conditions.go:105] duration metric: took 2.732723ms to run NodePressure ...
	I1025 09:46:51.810256  130766 start.go:241] waiting for startup goroutines ...
	I1025 09:46:51.846352  130766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:46:51.873125  130766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:46:52.549595  130766 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1025 09:46:52.551172  130766 addons.go:514] duration metric: took 1.08559386s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1025 09:46:52.551228  130766 start.go:246] waiting for cluster config update ...
	I1025 09:46:52.551246  130766 start.go:255] writing updated cluster config ...
	I1025 09:46:52.551597  130766 ssh_runner.go:195] Run: rm -f paused
	I1025 09:46:52.562427  130766 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:46:52.563162  130766 kapi.go:59] client config for test-preload-367687: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-103842/.minikube/profiles/test-preload-367687/client.key", CAFile:"/home/jenkins/minikube-integration/21794-103842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:46:52.591199  130766 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-klxnk" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:46:54.597906  130766 pod_ready.go:104] pod "coredns-668d6bf9bc-klxnk" is not "Ready", error: <nil>
	W1025 09:46:57.097867  130766 pod_ready.go:104] pod "coredns-668d6bf9bc-klxnk" is not "Ready", error: <nil>
	W1025 09:46:59.098255  130766 pod_ready.go:104] pod "coredns-668d6bf9bc-klxnk" is not "Ready", error: <nil>
	W1025 09:47:01.098468  130766 pod_ready.go:104] pod "coredns-668d6bf9bc-klxnk" is not "Ready", error: <nil>
	W1025 09:47:03.098934  130766 pod_ready.go:104] pod "coredns-668d6bf9bc-klxnk" is not "Ready", error: <nil>
	I1025 09:47:03.597074  130766 pod_ready.go:94] pod "coredns-668d6bf9bc-klxnk" is "Ready"
	I1025 09:47:03.597116  130766 pod_ready.go:86] duration metric: took 11.005869334s for pod "coredns-668d6bf9bc-klxnk" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:47:03.600135  130766 pod_ready.go:83] waiting for pod "etcd-test-preload-367687" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:47:03.605613  130766 pod_ready.go:94] pod "etcd-test-preload-367687" is "Ready"
	I1025 09:47:03.605642  130766 pod_ready.go:86] duration metric: took 5.476584ms for pod "etcd-test-preload-367687" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:47:03.608012  130766 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-367687" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:47:03.612525  130766 pod_ready.go:94] pod "kube-apiserver-test-preload-367687" is "Ready"
	I1025 09:47:03.612555  130766 pod_ready.go:86] duration metric: took 4.498901ms for pod "kube-apiserver-test-preload-367687" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:47:03.614978  130766 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-367687" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:47:05.622716  130766 pod_ready.go:104] pod "kube-controller-manager-test-preload-367687" is not "Ready", error: <nil>
	I1025 09:47:06.120540  130766 pod_ready.go:94] pod "kube-controller-manager-test-preload-367687" is "Ready"
	I1025 09:47:06.120576  130766 pod_ready.go:86] duration metric: took 2.505567557s for pod "kube-controller-manager-test-preload-367687" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:47:06.123017  130766 pod_ready.go:83] waiting for pod "kube-proxy-86gq9" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:47:06.394725  130766 pod_ready.go:94] pod "kube-proxy-86gq9" is "Ready"
	I1025 09:47:06.394765  130766 pod_ready.go:86] duration metric: took 271.71715ms for pod "kube-proxy-86gq9" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:47:06.596000  130766 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-367687" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:47:06.995183  130766 pod_ready.go:94] pod "kube-scheduler-test-preload-367687" is "Ready"
	I1025 09:47:06.995210  130766 pod_ready.go:86] duration metric: took 399.182004ms for pod "kube-scheduler-test-preload-367687" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:47:06.995222  130766 pod_ready.go:40] duration metric: took 14.432748784s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
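Each wait above polls one control-plane pod for the Ready condition. An equivalent one-shot check with stock kubectl, as a sketch (label selectors taken from the list in the log; repeat per label):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-dns --timeout=4m
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l component=kube-apiserver --timeout=4m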
	I1025 09:47:07.037524  130766 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1025 09:47:07.038983  130766 out.go:203] 
	W1025 09:47:07.040209  130766 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1025 09:47:07.041434  130766 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1025 09:47:07.042750  130766 out.go:179] * Done! kubectl is now configured to use "test-preload-367687" cluster and "default" namespace by default
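The warning at 09:47:07 flags a two-minor-version skew between the host kubectl (1.34.1) and the cluster (1.32.0); upstream supports kubectl within one minor version of the apiserver in either direction. The suggested workaround runs a version-matched kubectl through minikube itself, e.g.:

    minikube -p test-preload-367687 kubectl -- get pods -A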
	
	
	==> CRI-O <==
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.909609789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0e23313-fd99-432c-9e6e-72a123da1d88 name=/runtime.v1.RuntimeService/Version
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.910861979Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=403939e8-8ac0-463e-b76c-61bdbc6fe77c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.911894643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761385627911826910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=403939e8-8ac0-463e-b76c-61bdbc6fe77c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.912857661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b129bbdd-e646-4445-bdbc-7bf25b4b9094 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.912963956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b129bbdd-e646-4445-bdbc-7bf25b4b9094 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.913862624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd410bb68103eb884a7657e544a700cc9b2b0fffc6491eb65bd2264242fc9b6d,PodSandboxId:1eb8676a32dbe2d5f04d8b6a3e695884a222fddf71905450d712d7fb47146adc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761385614035926902,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-klxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9667a197-1299-4af3-9a5f-8249a20e725e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3930984a07e76d24d8e206ac21e43c1eaf8ab0fb4cd5deb6195bdd02ca9942e3,PodSandboxId:be8c3633c1cf35591c3ac64af95d2811b1cfcae2112f09f762449f6d00f8e10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761385611105792677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 47f478ca-4842-463d-bc6a-7f6ac88c24d5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88296778b7dd02e344227e95b77d27a63f54e3a39f37927ab32b8ebae836c4ba,PodSandboxId:8b50b298e672a67e22e0b21780ec206e3e1bd21f4cea702880524457d9dcb724,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761385610381232080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-86gq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26
1e46e0-22a9-4f7a-bada-adbe7abfa1ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e901d81506817f3290b29ec6b0512bc4de357f024c02fe5af4dbf432e037352,PodSandboxId:be8c3633c1cf35591c3ac64af95d2811b1cfcae2112f09f762449f6d00f8e10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761385610395003667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47f478ca-4842-4
63d-bc6a-7f6ac88c24d5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a1afbcf9fc26766860e5a3c340f4984b01bec2ebc8195542c86d273ba82a3da,PodSandboxId:f0095197f588dc9d8a29b5492607876f71679168d3e0b4cfc25bb7ef69093e8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761385606174666272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 15909708f8dde2a82c670fe373546ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb4f6cfa3c84d532cfd8f86585a15560164175ed09fe075d4419caa3ec35d12,PodSandboxId:28d826f7849a4171ac715e17c455eda7c73bae64f1a1f86a42bd84cdd95b8c79,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761385606192441903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfe61
3c8260c673954c0661adffe763,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114bf6a2c4a92baf3b10121a51d4b035d76e17971651f30132200e6b10503425,PodSandboxId:df1ce00474b869bef262d7403492cfa3e1d76cff939fe4f7547525ac98c5cd9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761385606160651475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b6b0aa7610f6d1e9f44709b3bf50f3,},Annotations:
map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e64c6fe61dc8ade5b97275b91901cb372926c153c956418409dbdfe753a3bd0,PodSandboxId:c5534923510d4948ed5aa33113004f40f43575b1dbd2911ff39fdd11f57513d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761385606122824158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc5a358cc46cc6c5d1183a1bc77e498,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b129bbdd-e646-4445-bdbc-7bf25b4b9094 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.955000783Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=528c3a7d-59a2-44e7-bc67-9810ec827e3b name=/runtime.v1.RuntimeService/Version
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.955436664Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=528c3a7d-59a2-44e7-bc67-9810ec827e3b name=/runtime.v1.RuntimeService/Version
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.956997332Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=784e2b33-5a8d-4d20-98f6-0a87e48dafd6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.957439659Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761385627957418092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=784e2b33-5a8d-4d20-98f6-0a87e48dafd6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.958032831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47458604-7538-466d-9c01-5732d2e4f730 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.958096590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47458604-7538-466d-9c01-5732d2e4f730 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.958287667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd410bb68103eb884a7657e544a700cc9b2b0fffc6491eb65bd2264242fc9b6d,PodSandboxId:1eb8676a32dbe2d5f04d8b6a3e695884a222fddf71905450d712d7fb47146adc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761385614035926902,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-klxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9667a197-1299-4af3-9a5f-8249a20e725e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3930984a07e76d24d8e206ac21e43c1eaf8ab0fb4cd5deb6195bdd02ca9942e3,PodSandboxId:be8c3633c1cf35591c3ac64af95d2811b1cfcae2112f09f762449f6d00f8e10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761385611105792677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 47f478ca-4842-463d-bc6a-7f6ac88c24d5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88296778b7dd02e344227e95b77d27a63f54e3a39f37927ab32b8ebae836c4ba,PodSandboxId:8b50b298e672a67e22e0b21780ec206e3e1bd21f4cea702880524457d9dcb724,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761385610381232080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-86gq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26
1e46e0-22a9-4f7a-bada-adbe7abfa1ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e901d81506817f3290b29ec6b0512bc4de357f024c02fe5af4dbf432e037352,PodSandboxId:be8c3633c1cf35591c3ac64af95d2811b1cfcae2112f09f762449f6d00f8e10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761385610395003667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47f478ca-4842-4
63d-bc6a-7f6ac88c24d5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a1afbcf9fc26766860e5a3c340f4984b01bec2ebc8195542c86d273ba82a3da,PodSandboxId:f0095197f588dc9d8a29b5492607876f71679168d3e0b4cfc25bb7ef69093e8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761385606174666272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 15909708f8dde2a82c670fe373546ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb4f6cfa3c84d532cfd8f86585a15560164175ed09fe075d4419caa3ec35d12,PodSandboxId:28d826f7849a4171ac715e17c455eda7c73bae64f1a1f86a42bd84cdd95b8c79,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761385606192441903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfe61
3c8260c673954c0661adffe763,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114bf6a2c4a92baf3b10121a51d4b035d76e17971651f30132200e6b10503425,PodSandboxId:df1ce00474b869bef262d7403492cfa3e1d76cff939fe4f7547525ac98c5cd9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761385606160651475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b6b0aa7610f6d1e9f44709b3bf50f3,},Annotations:
map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e64c6fe61dc8ade5b97275b91901cb372926c153c956418409dbdfe753a3bd0,PodSandboxId:c5534923510d4948ed5aa33113004f40f43575b1dbd2911ff39fdd11f57513d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761385606122824158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc5a358cc46cc6c5d1183a1bc77e498,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47458604-7538-466d-9c01-5732d2e4f730 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.993197572Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cea087e8-da4f-46ee-acba-7c71bc9ca74d name=/runtime.v1.RuntimeService/Version
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.993289435Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cea087e8-da4f-46ee-acba-7c71bc9ca74d name=/runtime.v1.RuntimeService/Version
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.994536692Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bef03508-d9ef-4f8e-a442-0d56e8ed2dd2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.995738997Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761385627995712185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bef03508-d9ef-4f8e-a442-0d56e8ed2dd2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.996554426Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df51f7ef-f8b9-4ac7-b0da-33937e07c9bb name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.997045126Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2240d428-7efe-4cf4-b71d-8dddc4188a12 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.997184488Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2240d428-7efe-4cf4-b71d-8dddc4188a12 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.997020271Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1eb8676a32dbe2d5f04d8b6a3e695884a222fddf71905450d712d7fb47146adc,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-klxnk,Uid:9667a197-1299-4af3-9a5f-8249a20e725e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761385613804501196,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-klxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9667a197-1299-4af3-9a5f-8249a20e725e,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-25T09:46:49.952425238Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:be8c3633c1cf35591c3ac64af95d2811b1cfcae2112f09f762449f6d00f8e10f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:47f478ca-4842-463d-bc6a-7f6ac88c24d5,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761385610265192429,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47f478ca-4842-463d-bc6a-7f6ac88c24d5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-25T09:46:49.952423781Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8b50b298e672a67e22e0b21780ec206e3e1bd21f4cea702880524457d9dcb724,Metadata:&PodSandboxMetadata{Name:kube-proxy-86gq9,Uid:261e46e0-22a9-4f7a-bada-adbe7abfa1ca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761385610260086241,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-86gq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261e46e0-22a9-4f7a-bada-adbe7abfa1ca,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-25T09:46:49.952429726Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:df1ce00474b869bef262d7403492cfa3e1d76cff939fe4f7547525ac98c5cd9c,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-367687,Uid:75b6b0aa7610f6d1e
9f44709b3bf50f3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761385605934004500,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b6b0aa7610f6d1e9f44709b3bf50f3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.196:2379,kubernetes.io/config.hash: 75b6b0aa7610f6d1e9f44709b3bf50f3,kubernetes.io/config.seen: 2025-10-25T09:46:44.019954777Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:28d826f7849a4171ac715e17c455eda7c73bae64f1a1f86a42bd84cdd95b8c79,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-367687,Uid:ecfe613c8260c673954c0661adffe763,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761385605927117220,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-pr
eload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfe613c8260c673954c0661adffe763,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ecfe613c8260c673954c0661adffe763,kubernetes.io/config.seen: 2025-10-25T09:46:43.951266569Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f0095197f588dc9d8a29b5492607876f71679168d3e0b4cfc25bb7ef69093e8e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-367687,Uid:15909708f8dde2a82c670fe373546ce9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761385605913306284,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15909708f8dde2a82c670fe373546ce9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15909708f8dde2a82c670fe373546ce9,kubernetes.io/config.seen: 2025-10-25T09
:46:43.951262091Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c5534923510d4948ed5aa33113004f40f43575b1dbd2911ff39fdd11f57513d1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-367687,Uid:0fc5a358cc46cc6c5d1183a1bc77e498,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761385605902927928,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc5a358cc46cc6c5d1183a1bc77e498,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.196:8443,kubernetes.io/config.hash: 0fc5a358cc46cc6c5d1183a1bc77e498,kubernetes.io/config.seen: 2025-10-25T09:46:43.951257468Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=df51f7ef-f8b9-4ac7-b0da-33937e07c9bb name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.997784890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd410bb68103eb884a7657e544a700cc9b2b0fffc6491eb65bd2264242fc9b6d,PodSandboxId:1eb8676a32dbe2d5f04d8b6a3e695884a222fddf71905450d712d7fb47146adc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761385614035926902,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-klxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9667a197-1299-4af3-9a5f-8249a20e725e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3930984a07e76d24d8e206ac21e43c1eaf8ab0fb4cd5deb6195bdd02ca9942e3,PodSandboxId:be8c3633c1cf35591c3ac64af95d2811b1cfcae2112f09f762449f6d00f8e10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761385611105792677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 47f478ca-4842-463d-bc6a-7f6ac88c24d5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88296778b7dd02e344227e95b77d27a63f54e3a39f37927ab32b8ebae836c4ba,PodSandboxId:8b50b298e672a67e22e0b21780ec206e3e1bd21f4cea702880524457d9dcb724,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761385610381232080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-86gq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26
1e46e0-22a9-4f7a-bada-adbe7abfa1ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e901d81506817f3290b29ec6b0512bc4de357f024c02fe5af4dbf432e037352,PodSandboxId:be8c3633c1cf35591c3ac64af95d2811b1cfcae2112f09f762449f6d00f8e10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761385610395003667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47f478ca-4842-4
63d-bc6a-7f6ac88c24d5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a1afbcf9fc26766860e5a3c340f4984b01bec2ebc8195542c86d273ba82a3da,PodSandboxId:f0095197f588dc9d8a29b5492607876f71679168d3e0b4cfc25bb7ef69093e8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761385606174666272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 15909708f8dde2a82c670fe373546ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb4f6cfa3c84d532cfd8f86585a15560164175ed09fe075d4419caa3ec35d12,PodSandboxId:28d826f7849a4171ac715e17c455eda7c73bae64f1a1f86a42bd84cdd95b8c79,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761385606192441903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfe61
3c8260c673954c0661adffe763,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114bf6a2c4a92baf3b10121a51d4b035d76e17971651f30132200e6b10503425,PodSandboxId:df1ce00474b869bef262d7403492cfa3e1d76cff939fe4f7547525ac98c5cd9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761385606160651475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b6b0aa7610f6d1e9f44709b3bf50f3,},Annotations:
map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e64c6fe61dc8ade5b97275b91901cb372926c153c956418409dbdfe753a3bd0,PodSandboxId:c5534923510d4948ed5aa33113004f40f43575b1dbd2911ff39fdd11f57513d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761385606122824158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc5a358cc46cc6c5d1183a1bc77e498,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2240d428-7efe-4cf4-b71d-8dddc4188a12 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.997854954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8b8a741-a910-43e1-97d0-569ffb15ac53 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.998230994Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8b8a741-a910-43e1-97d0-569ffb15ac53 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:47:07 test-preload-367687 crio[835]: time="2025-10-25 09:47:07.998373687Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd410bb68103eb884a7657e544a700cc9b2b0fffc6491eb65bd2264242fc9b6d,PodSandboxId:1eb8676a32dbe2d5f04d8b6a3e695884a222fddf71905450d712d7fb47146adc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761385614035926902,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-klxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9667a197-1299-4af3-9a5f-8249a20e725e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3930984a07e76d24d8e206ac21e43c1eaf8ab0fb4cd5deb6195bdd02ca9942e3,PodSandboxId:be8c3633c1cf35591c3ac64af95d2811b1cfcae2112f09f762449f6d00f8e10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761385611105792677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 47f478ca-4842-463d-bc6a-7f6ac88c24d5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88296778b7dd02e344227e95b77d27a63f54e3a39f37927ab32b8ebae836c4ba,PodSandboxId:8b50b298e672a67e22e0b21780ec206e3e1bd21f4cea702880524457d9dcb724,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761385610381232080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-86gq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26
1e46e0-22a9-4f7a-bada-adbe7abfa1ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a1afbcf9fc26766860e5a3c340f4984b01bec2ebc8195542c86d273ba82a3da,PodSandboxId:f0095197f588dc9d8a29b5492607876f71679168d3e0b4cfc25bb7ef69093e8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761385606174666272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 15909708f8dde2a82c670fe373546ce9,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb4f6cfa3c84d532cfd8f86585a15560164175ed09fe075d4419caa3ec35d12,PodSandboxId:28d826f7849a4171ac715e17c455eda7c73bae64f1a1f86a42bd84cdd95b8c79,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761385606192441903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: ecfe613c8260c673954c0661adffe763,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114bf6a2c4a92baf3b10121a51d4b035d76e17971651f30132200e6b10503425,PodSandboxId:df1ce00474b869bef262d7403492cfa3e1d76cff939fe4f7547525ac98c5cd9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761385606160651475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b6b0aa7610f6d1e9f44709b3bf50f3,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e64c6fe61dc8ade5b97275b91901cb372926c153c956418409dbdfe753a3bd0,PodSandboxId:c5534923510d4948ed5aa33113004f40f43575b1dbd2911ff39fdd11f57513d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761385606122824158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-367687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc5a358cc46cc6c5d1183a1bc77e498,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e8b8a741-a910-43e1-97d0-569ffb15ac53 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd410bb68103e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 seconds ago      Running             coredns                   1                   1eb8676a32dbe       coredns-668d6bf9bc-klxnk
	3930984a07e76       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       3                   be8c3633c1cf3       storage-provisioner
	6e901d8150681       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago      Exited              storage-provisioner       2                   be8c3633c1cf3       storage-provisioner
	88296778b7dd0       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   17 seconds ago      Running             kube-proxy                1                   8b50b298e672a       kube-proxy-86gq9
	2cb4f6cfa3c84       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   21 seconds ago      Running             kube-scheduler            1                   28d826f7849a4       kube-scheduler-test-preload-367687
	1a1afbcf9fc26       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   21 seconds ago      Running             kube-controller-manager   1                   f0095197f588d       kube-controller-manager-test-preload-367687
	114bf6a2c4a92       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   21 seconds ago      Running             etcd                      1                   df1ce00474b86       etcd-test-preload-367687
	9e64c6fe61dc8       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   21 seconds ago      Running             kube-apiserver            1                   c5534923510d4       kube-apiserver-test-preload-367687
	
	
	==> coredns [fd410bb68103eb884a7657e544a700cc9b2b0fffc6491eb65bd2264242fc9b6d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33590 - 37956 "HINFO IN 4337683150127083718.1630714224737602860. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024461146s
	
	
	==> describe nodes <==
	Name:               test-preload-367687
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-367687
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=test-preload-367687
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_45_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:45:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-367687
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:46:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:46:51 +0000   Sat, 25 Oct 2025 09:45:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:46:51 +0000   Sat, 25 Oct 2025 09:45:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:46:51 +0000   Sat, 25 Oct 2025 09:45:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:46:51 +0000   Sat, 25 Oct 2025 09:46:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    test-preload-367687
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b593800677e4e0c984353a8ad83df72
	  System UUID:                1b593800-677e-4e0c-9843-53a8ad83df72
	  Boot ID:                    5b852ddf-466a-42e5-aebe-144f8ee5e997
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-klxnk                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     82s
	  kube-system                 etcd-test-preload-367687                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         86s
	  kube-system                 kube-apiserver-test-preload-367687             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-test-preload-367687    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-86gq9                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-test-preload-367687             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 79s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Normal   Starting                 87s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  86s                kubelet          Node test-preload-367687 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    86s                kubelet          Node test-preload-367687 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     86s                kubelet          Node test-preload-367687 status is now: NodeHasSufficientPID
	  Normal   NodeReady                86s                kubelet          Node test-preload-367687 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           83s                node-controller  Node test-preload-367687 event: Registered Node test-preload-367687 in Controller
	  Normal   Starting                 25s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node test-preload-367687 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node test-preload-367687 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node test-preload-367687 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 19s                kubelet          Node test-preload-367687 has been rebooted, boot id: 5b852ddf-466a-42e5-aebe-144f8ee5e997
	  Normal   RegisteredNode           16s                node-controller  Node test-preload-367687 event: Registered Node test-preload-367687 in Controller
	
	
	==> dmesg <==
	[Oct25 09:46] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000047] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003116] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.921280] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000005] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.121146] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.098687] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.474878] kauditd_printk_skb: 177 callbacks suppressed
	[Oct25 09:47] kauditd_printk_skb: 212 callbacks suppressed
	
	
	==> etcd [114bf6a2c4a92baf3b10121a51d4b035d76e17971651f30132200e6b10503425] <==
	{"level":"info","ts":"2025-10-25T09:46:46.528051Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-25T09:46:46.535736Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:46:46.536128Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:46:46.546200Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:46:46.546910Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2025-10-25T09:46:46.546940Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2025-10-25T09:46:46.546450Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T09:46:46.547530Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"a14f9258d3b66c75","initial-advertise-peer-urls":["https://192.168.39.196:2380"],"listen-peer-urls":["https://192.168.39.196:2380"],"advertise-client-urls":["https://192.168.39.196:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.196:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-25T09:46:46.548167Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T09:46:48.386541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-25T09:46:48.386576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-25T09:46:48.386605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 received MsgPreVoteResp from a14f9258d3b66c75 at term 2"}
	{"level":"info","ts":"2025-10-25T09:46:48.386616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became candidate at term 3"}
	{"level":"info","ts":"2025-10-25T09:46:48.386622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 received MsgVoteResp from a14f9258d3b66c75 at term 3"}
	{"level":"info","ts":"2025-10-25T09:46:48.386630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became leader at term 3"}
	{"level":"info","ts":"2025-10-25T09:46:48.386636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a14f9258d3b66c75 elected leader a14f9258d3b66c75 at term 3"}
	{"level":"info","ts":"2025-10-25T09:46:48.391232Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"a14f9258d3b66c75","local-member-attributes":"{Name:test-preload-367687 ClientURLs:[https://192.168.39.196:2379]}","request-path":"/0/members/a14f9258d3b66c75/attributes","cluster-id":"8309c60c27e527a4","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T09:46:48.391278Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:46:48.391637Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T09:46:48.391730Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-25T09:46:48.391716Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:46:48.392218Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-25T09:46:48.392600Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-25T09:46:48.392838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T09:46:48.393797Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.196:2379"}
	
	
	==> kernel <==
	 09:47:08 up 0 min,  0 users,  load average: 0.43, 0.12, 0.04
	Linux test-preload-367687 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [9e64c6fe61dc8ade5b97275b91901cb372926c153c956418409dbdfe753a3bd0] <==
	I1025 09:46:49.455233       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:46:49.455240       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:46:49.455246       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:46:49.518923       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1025 09:46:49.519067       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:46:49.519118       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:46:49.518963       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:46:49.519650       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:46:49.520817       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1025 09:46:49.531773       1 shared_informer.go:320] Caches are synced for configmaps
	I1025 09:46:49.532064       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:46:49.553845       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1025 09:46:49.554828       1 policy_source.go:240] refreshing policies
	I1025 09:46:49.563166       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 09:46:49.564176       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:46:49.604998       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:46:49.983599       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1025 09:46:50.439127       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:46:51.296014       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1025 09:46:51.331382       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1025 09:46:51.374841       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:46:51.382990       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:46:52.938844       1 controller.go:615] quota admission added evaluator for: endpoints
	I1025 09:46:53.043551       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1025 09:46:53.138092       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1a1afbcf9fc26766860e5a3c340f4984b01bec2ebc8195542c86d273ba82a3da] <==
	I1025 09:46:52.639273       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-367687"
	I1025 09:46:52.639412       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:46:52.643131       1 shared_informer.go:320] Caches are synced for attach detach
	I1025 09:46:52.644904       1 shared_informer.go:320] Caches are synced for resource quota
	I1025 09:46:52.655339       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1025 09:46:52.677019       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1025 09:46:52.679544       1 shared_informer.go:320] Caches are synced for node
	I1025 09:46:52.679595       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:46:52.679653       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:46:52.679660       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1025 09:46:52.679665       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1025 09:46:52.679731       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-367687"
	I1025 09:46:52.682125       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1025 09:46:52.682290       1 shared_informer.go:320] Caches are synced for PV protection
	I1025 09:46:52.685979       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1025 09:46:52.686105       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1025 09:46:52.735628       1 shared_informer.go:320] Caches are synced for garbage collector
	I1025 09:46:52.735650       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:46:52.735657       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:46:52.756162       1 shared_informer.go:320] Caches are synced for garbage collector
	I1025 09:46:53.057632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="446.268311ms"
	I1025 09:46:53.057735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.439µs"
	I1025 09:46:55.130197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="65.77µs"
	I1025 09:47:03.118773       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.437014ms"
	I1025 09:47:03.118949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="45.486µs"
	
	
	==> kube-proxy [88296778b7dd02e344227e95b77d27a63f54e3a39f37927ab32b8ebae836c4ba] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1025 09:46:50.764074       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1025 09:46:50.777530       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.196"]
	E1025 09:46:50.777684       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:46:50.812960       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1025 09:46:50.813024       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 09:46:50.813048       1 server_linux.go:170] "Using iptables Proxier"
	I1025 09:46:50.815639       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:46:50.815924       1 server.go:497] "Version info" version="v1.32.0"
	I1025 09:46:50.815954       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:46:50.817804       1 config.go:199] "Starting service config controller"
	I1025 09:46:50.817841       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1025 09:46:50.817867       1 config.go:105] "Starting endpoint slice config controller"
	I1025 09:46:50.817871       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1025 09:46:50.819592       1 config.go:329] "Starting node config controller"
	I1025 09:46:50.819620       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1025 09:46:50.918030       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1025 09:46:50.918049       1 shared_informer.go:320] Caches are synced for service config
	I1025 09:46:50.919687       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2cb4f6cfa3c84d532cfd8f86585a15560164175ed09fe075d4419caa3ec35d12] <==
	I1025 09:46:47.022064       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:46:49.477427       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:46:49.477920       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:46:49.478115       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:46:49.478214       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:46:49.516053       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1025 09:46:49.516165       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:46:49.520629       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1025 09:46:49.522351       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:46:49.524528       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 09:46:49.522366       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:46:49.625761       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 09:46:49 test-preload-367687 kubelet[1158]: I1025 09:46:49.609374    1158 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-367687"
	Oct 25 09:46:49 test-preload-367687 kubelet[1158]: E1025 09:46:49.623072    1158 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-367687\" already exists" pod="kube-system/kube-controller-manager-test-preload-367687"
	Oct 25 09:46:49 test-preload-367687 kubelet[1158]: I1025 09:46:49.623233    1158 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-367687"
	Oct 25 09:46:49 test-preload-367687 kubelet[1158]: E1025 09:46:49.633658    1158 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-367687\" already exists" pod="kube-system/kube-scheduler-test-preload-367687"
	Oct 25 09:46:49 test-preload-367687 kubelet[1158]: I1025 09:46:49.633702    1158 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-367687"
	Oct 25 09:46:49 test-preload-367687 kubelet[1158]: E1025 09:46:49.643726    1158 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-367687\" already exists" pod="kube-system/etcd-test-preload-367687"
	Oct 25 09:46:49 test-preload-367687 kubelet[1158]: I1025 09:46:49.949205    1158 apiserver.go:52] "Watching apiserver"
	Oct 25 09:46:49 test-preload-367687 kubelet[1158]: E1025 09:46:49.953306    1158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-klxnk" podUID="9667a197-1299-4af3-9a5f-8249a20e725e"
	Oct 25 09:46:49 test-preload-367687 kubelet[1158]: I1025 09:46:49.974711    1158 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 25 09:46:49 test-preload-367687 kubelet[1158]: I1025 09:46:49.977299    1158 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/47f478ca-4842-463d-bc6a-7f6ac88c24d5-tmp\") pod \"storage-provisioner\" (UID: \"47f478ca-4842-463d-bc6a-7f6ac88c24d5\") " pod="kube-system/storage-provisioner"
	Oct 25 09:46:49 test-preload-367687 kubelet[1158]: I1025 09:46:49.977335    1158 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/261e46e0-22a9-4f7a-bada-adbe7abfa1ca-lib-modules\") pod \"kube-proxy-86gq9\" (UID: \"261e46e0-22a9-4f7a-bada-adbe7abfa1ca\") " pod="kube-system/kube-proxy-86gq9"
	Oct 25 09:46:49 test-preload-367687 kubelet[1158]: I1025 09:46:49.977365    1158 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/261e46e0-22a9-4f7a-bada-adbe7abfa1ca-xtables-lock\") pod \"kube-proxy-86gq9\" (UID: \"261e46e0-22a9-4f7a-bada-adbe7abfa1ca\") " pod="kube-system/kube-proxy-86gq9"
	Oct 25 09:46:49 test-preload-367687 kubelet[1158]: E1025 09:46:49.977807    1158 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 09:46:49 test-preload-367687 kubelet[1158]: E1025 09:46:49.977875    1158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9667a197-1299-4af3-9a5f-8249a20e725e-config-volume podName:9667a197-1299-4af3-9a5f-8249a20e725e nodeName:}" failed. No retries permitted until 2025-10-25 09:46:50.477852482 +0000 UTC m=+6.638433548 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9667a197-1299-4af3-9a5f-8249a20e725e-config-volume") pod "coredns-668d6bf9bc-klxnk" (UID: "9667a197-1299-4af3-9a5f-8249a20e725e") : object "kube-system"/"coredns" not registered
	Oct 25 09:46:50 test-preload-367687 kubelet[1158]: E1025 09:46:50.480090    1158 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 09:46:50 test-preload-367687 kubelet[1158]: E1025 09:46:50.480172    1158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9667a197-1299-4af3-9a5f-8249a20e725e-config-volume podName:9667a197-1299-4af3-9a5f-8249a20e725e nodeName:}" failed. No retries permitted until 2025-10-25 09:46:51.480158561 +0000 UTC m=+7.640739609 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9667a197-1299-4af3-9a5f-8249a20e725e-config-volume") pod "coredns-668d6bf9bc-klxnk" (UID: "9667a197-1299-4af3-9a5f-8249a20e725e") : object "kube-system"/"coredns" not registered
	Oct 25 09:46:51 test-preload-367687 kubelet[1158]: I1025 09:46:51.080741    1158 scope.go:117] "RemoveContainer" containerID="6e901d81506817f3290b29ec6b0512bc4de357f024c02fe5af4dbf432e037352"
	Oct 25 09:46:51 test-preload-367687 kubelet[1158]: I1025 09:46:51.376830    1158 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 25 09:46:51 test-preload-367687 kubelet[1158]: E1025 09:46:51.488796    1158 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 09:46:51 test-preload-367687 kubelet[1158]: E1025 09:46:51.488857    1158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9667a197-1299-4af3-9a5f-8249a20e725e-config-volume podName:9667a197-1299-4af3-9a5f-8249a20e725e nodeName:}" failed. No retries permitted until 2025-10-25 09:46:53.488843715 +0000 UTC m=+9.649424774 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9667a197-1299-4af3-9a5f-8249a20e725e-config-volume") pod "coredns-668d6bf9bc-klxnk" (UID: "9667a197-1299-4af3-9a5f-8249a20e725e") : object "kube-system"/"coredns" not registered
	Oct 25 09:46:54 test-preload-367687 kubelet[1158]: E1025 09:46:54.052238    1158 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761385614051933879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 09:46:54 test-preload-367687 kubelet[1158]: E1025 09:46:54.052258    1158 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761385614051933879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 09:47:03 test-preload-367687 kubelet[1158]: I1025 09:47:03.087137    1158 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 09:47:04 test-preload-367687 kubelet[1158]: E1025 09:47:04.055390    1158 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761385624054591739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 09:47:04 test-preload-367687 kubelet[1158]: E1025 09:47:04.055576    1158 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761385624054591739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [3930984a07e76d24d8e206ac21e43c1eaf8ab0fb4cd5deb6195bdd02ca9942e3] <==
	I1025 09:46:51.236814       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:46:51.248393       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:46:51.250521       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [6e901d81506817f3290b29ec6b0512bc4de357f024c02fe5af4dbf432e037352] <==
	I1025 09:46:50.577681       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:46:50.588649       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-367687 -n test-preload-367687
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-367687 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-367687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-367687
--- FAIL: TestPreload (137.27s)

Test pass (287/329)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 24.62
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 15.03
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.68
22 TestOffline 55.72
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 204.14
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 10.54
35 TestAddons/parallel/Registry 16.83
36 TestAddons/parallel/RegistryCreds 0.77
38 TestAddons/parallel/InspektorGadget 6.3
39 TestAddons/parallel/MetricsServer 6.54
41 TestAddons/parallel/CSI 67.12
42 TestAddons/parallel/Headlamp 21.23
43 TestAddons/parallel/CloudSpanner 6.65
44 TestAddons/parallel/LocalPath 56.95
45 TestAddons/parallel/NvidiaDevicePlugin 6.95
46 TestAddons/parallel/Yakd 10.82
48 TestAddons/StoppedEnableDisable 77.04
49 TestCertOptions 92.76
50 TestCertExpiration 277.45
52 TestForceSystemdFlag 63.05
53 TestForceSystemdEnv 69.49
58 TestErrorSpam/setup 36.4
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.7
61 TestErrorSpam/pause 1.56
62 TestErrorSpam/unpause 1.86
63 TestErrorSpam/stop 5.38
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 58.79
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 28.52
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.12
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.76
75 TestFunctional/serial/CacheCmd/cache/add_local 2.62
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.33
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 36.42
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.49
86 TestFunctional/serial/LogsFileCmd 1.43
87 TestFunctional/serial/InvalidService 4.01
89 TestFunctional/parallel/ConfigCmd 0.42
90 TestFunctional/parallel/DashboardCmd 35.2
91 TestFunctional/parallel/DryRun 0.23
92 TestFunctional/parallel/InternationalLanguage 0.12
93 TestFunctional/parallel/StatusCmd 0.76
97 TestFunctional/parallel/ServiceCmdConnect 12.81
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 48.04
101 TestFunctional/parallel/SSHCmd 0.37
102 TestFunctional/parallel/CpCmd 1.13
103 TestFunctional/parallel/MySQL 29.05
104 TestFunctional/parallel/FileSync 0.24
105 TestFunctional/parallel/CertSync 1.16
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.37
113 TestFunctional/parallel/License 0.91
114 TestFunctional/parallel/ServiceCmd/DeployApp 9.19
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
125 TestFunctional/parallel/ProfileCmd/profile_list 0.35
126 TestFunctional/parallel/MountCmd/any-port 8.33
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
128 TestFunctional/parallel/Version/short 0.06
129 TestFunctional/parallel/Version/components 0.61
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
134 TestFunctional/parallel/ImageCommands/ImageBuild 6.12
135 TestFunctional/parallel/ImageCommands/Setup 1.8
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.3
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.75
139 TestFunctional/parallel/ServiceCmd/List 0.28
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.74
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
143 TestFunctional/parallel/MountCmd/specific-port 1.56
144 TestFunctional/parallel/ServiceCmd/Format 0.26
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
146 TestFunctional/parallel/ServiceCmd/URL 0.26
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.71
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.49
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 202.99
161 TestMultiControlPlane/serial/DeployApp 9.79
162 TestMultiControlPlane/serial/PingHostFromPods 1.33
163 TestMultiControlPlane/serial/AddWorkerNode 43.35
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.71
166 TestMultiControlPlane/serial/CopyFile 10.84
167 TestMultiControlPlane/serial/StopSecondaryNode 87.33
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
169 TestMultiControlPlane/serial/RestartSecondaryNode 36.56
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.82
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 379.25
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.58
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
174 TestMultiControlPlane/serial/StopCluster 249.78
175 TestMultiControlPlane/serial/RestartCluster 91.07
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
177 TestMultiControlPlane/serial/AddSecondaryNode 87.63
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.71
182 TestJSONOutput/start/Command 56.8
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.74
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.63
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 6.87
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.23
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 84.78
214 TestMountStart/serial/StartWithMountFirst 21.3
215 TestMountStart/serial/VerifyMountFirst 0.31
216 TestMountStart/serial/StartWithMountSecond 20.95
217 TestMountStart/serial/VerifyMountSecond 0.31
218 TestMountStart/serial/DeleteFirst 0.69
219 TestMountStart/serial/VerifyMountPostDelete 0.3
220 TestMountStart/serial/Stop 1.37
221 TestMountStart/serial/RestartStopped 19.24
222 TestMountStart/serial/VerifyMountPostStop 0.3
225 TestMultiNode/serial/FreshStart2Nodes 98.55
226 TestMultiNode/serial/DeployApp2Nodes 5.49
227 TestMultiNode/serial/PingHostFrom2Pods 0.88
228 TestMultiNode/serial/AddNode 46.36
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.47
231 TestMultiNode/serial/CopyFile 6.09
232 TestMultiNode/serial/StopNode 2.32
233 TestMultiNode/serial/StartAfterStop 39.98
234 TestMultiNode/serial/RestartKeepsNodes 307.66
235 TestMultiNode/serial/DeleteNode 2.56
236 TestMultiNode/serial/StopMultiNode 166.7
237 TestMultiNode/serial/RestartMultiNode 87.04
238 TestMultiNode/serial/ValidateNameConflict 38.77
245 TestScheduledStopUnix 109.65
249 TestRunningBinaryUpgrade 150.3
251 TestKubernetesUpgrade 202.15
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
255 TestNoKubernetes/serial/StartWithK8s 84.88
256 TestStoppedBinaryUpgrade/Setup 2.98
257 TestStoppedBinaryUpgrade/Upgrade 106.93
258 TestNoKubernetes/serial/StartWithStopK8s 45.2
259 TestNoKubernetes/serial/Start 38.64
267 TestStoppedBinaryUpgrade/MinikubeLogs 1.07
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
272 TestNoKubernetes/serial/ProfileList 0.74
273 TestNoKubernetes/serial/Stop 1.36
274 TestNoKubernetes/serial/StartNoArgs 41.97
279 TestNetworkPlugins/group/false 3.67
284 TestPause/serial/Start 83.59
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
287 TestStartStop/group/old-k8s-version/serial/FirstStart 102.1
288 TestPause/serial/SecondStartNoReconfiguration 55.19
290 TestStartStop/group/no-preload/serial/FirstStart 80.02
291 TestPause/serial/Pause 0.81
292 TestPause/serial/VerifyStatus 0.24
293 TestPause/serial/Unpause 0.74
294 TestPause/serial/PauseAgain 0.9
295 TestPause/serial/DeletePaused 1.11
296 TestPause/serial/VerifyDeletedResources 0.65
298 TestStartStop/group/embed-certs/serial/FirstStart 51.49
299 TestStartStop/group/old-k8s-version/serial/DeployApp 11.39
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.32
301 TestStartStop/group/old-k8s-version/serial/Stop 86.16
302 TestStartStop/group/no-preload/serial/DeployApp 11.31
303 TestStartStop/group/embed-certs/serial/DeployApp 10.27
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
305 TestStartStop/group/no-preload/serial/Stop 74.66
306 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
307 TestStartStop/group/embed-certs/serial/Stop 77.81
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
309 TestStartStop/group/old-k8s-version/serial/SecondStart 44.42
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.37
311 TestStartStop/group/no-preload/serial/SecondStart 65.54
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
313 TestStartStop/group/embed-certs/serial/SecondStart 57.44
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.75
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 17.01
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
319 TestStartStop/group/old-k8s-version/serial/Pause 3.97
321 TestStartStop/group/newest-cni/serial/FirstStart 60.07
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 17.01
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
326 TestStartStop/group/no-preload/serial/Pause 3.25
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
328 TestNetworkPlugins/group/auto/Start 59.36
329 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
330 TestStartStop/group/embed-certs/serial/Pause 3.18
331 TestNetworkPlugins/group/kindnet/Start 74.61
332 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.32
333 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.06
335 TestStartStop/group/newest-cni/serial/Stop 7.96
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.24
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 83.58
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.81
339 TestStartStop/group/newest-cni/serial/SecondStart 47.98
340 TestNetworkPlugins/group/auto/KubeletFlags 0.19
341 TestNetworkPlugins/group/auto/NetCatPod 12.26
342 TestNetworkPlugins/group/auto/DNS 0.16
343 TestNetworkPlugins/group/auto/Localhost 0.15
344 TestNetworkPlugins/group/auto/HairPin 0.14
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.2
348 TestStartStop/group/newest-cni/serial/Pause 2.5
349 TestNetworkPlugins/group/calico/Start 94.84
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/custom-flannel/Start 91.66
352 TestNetworkPlugins/group/kindnet/KubeletFlags 0.17
353 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
354 TestNetworkPlugins/group/kindnet/DNS 0.14
355 TestNetworkPlugins/group/kindnet/Localhost 0.13
356 TestNetworkPlugins/group/kindnet/HairPin 0.13
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
358 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 65.06
359 TestNetworkPlugins/group/enable-default-cni/Start 87.96
360 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
361 TestNetworkPlugins/group/calico/ControllerPod 6.01
362 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
363 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.29
364 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
365 TestNetworkPlugins/group/calico/KubeletFlags 0.22
366 TestNetworkPlugins/group/calico/NetCatPod 11.32
367 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
368 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.95
369 TestNetworkPlugins/group/flannel/Start 72.67
370 TestNetworkPlugins/group/custom-flannel/DNS 0.19
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.23
372 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
373 TestNetworkPlugins/group/calico/DNS 0.17
374 TestNetworkPlugins/group/calico/Localhost 0.14
375 TestNetworkPlugins/group/calico/HairPin 0.14
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.34
378 TestNetworkPlugins/group/bridge/Start 56.13
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
384 TestNetworkPlugins/group/bridge/NetCatPod 11.24
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
386 TestNetworkPlugins/group/flannel/NetCatPod 10.26
387 TestNetworkPlugins/group/bridge/DNS 0.15
388 TestNetworkPlugins/group/bridge/Localhost 0.12
389 TestNetworkPlugins/group/bridge/HairPin 0.12
390 TestNetworkPlugins/group/flannel/DNS 0.15
391 TestNetworkPlugins/group/flannel/Localhost 0.12
392 TestNetworkPlugins/group/flannel/HairPin 0.12

TestDownloadOnly/v1.28.0/json-events (24.62s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-484807 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-484807 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.620090559s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (24.62s)
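Note: the invocation above is reproducible outside the test harness. A minimal sketch, assuming a locally built binary at out/minikube-linux-amd64 (the harness passes --container-runtime=crio twice; once is enough):

	# Download the ISO, preload tarball, and images for v1.28.0 without booting a VM
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-484807 \
	  --force --alsologtostderr --kubernetes-version=v1.28.0 \
	  --driver=kvm2 --container-runtime=crio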

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1025 08:55:11.506166  107766 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1025 08:55:11.506271  107766 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-103842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
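Note: this subtest reduces to a stat of the cached tarball; a rough shell equivalent, assuming the MINIKUBE_HOME layout shown in the log above:

	# Confirm the v1.28.0 preload tarball is present in the profile cache
	ls -lh /home/jenkins/minikube-integration/21794-103842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4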

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-484807
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-484807: exit status 85 (78.047067ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-484807 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-484807 │ jenkins │ v1.37.0 │ 25 Oct 25 08:54 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:54:46
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:54:46.940200  107778 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:54:46.940460  107778 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:54:46.940470  107778 out.go:374] Setting ErrFile to fd 2...
	I1025 08:54:46.940475  107778 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:54:46.940648  107778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
	W1025 08:54:46.940810  107778 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21794-103842/.minikube/config/config.json: open /home/jenkins/minikube-integration/21794-103842/.minikube/config/config.json: no such file or directory
	I1025 08:54:46.941307  107778 out.go:368] Setting JSON to true
	I1025 08:54:46.942146  107778 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2228,"bootTime":1761380259,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:54:46.942243  107778 start.go:141] virtualization: kvm guest
	I1025 08:54:46.944452  107778 out.go:99] [download-only-484807] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1025 08:54:46.944612  107778 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21794-103842/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 08:54:46.944669  107778 notify.go:220] Checking for updates...
	I1025 08:54:46.946188  107778 out.go:171] MINIKUBE_LOCATION=21794
	I1025 08:54:46.947644  107778 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:54:46.948961  107778 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21794-103842/kubeconfig
	I1025 08:54:46.950500  107778 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-103842/.minikube
	I1025 08:54:46.951965  107778 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 08:54:46.954450  107778 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 08:54:46.954711  107778 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:54:47.440026  107778 out.go:99] Using the kvm2 driver based on user configuration
	I1025 08:54:47.440077  107778 start.go:305] selected driver: kvm2
	I1025 08:54:47.440085  107778 start.go:925] validating driver "kvm2" against <nil>
	I1025 08:54:47.440428  107778 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:54:47.440906  107778 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1025 08:54:47.441042  107778 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 08:54:47.441066  107778 cni.go:84] Creating CNI manager for ""
	I1025 08:54:47.441102  107778 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 08:54:47.441112  107778 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 08:54:47.441154  107778 start.go:349] cluster config:
	{Name:download-only-484807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-484807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:54:47.441331  107778 iso.go:125] acquiring lock: {Name:mk13c1ce3bc6ed883268d1bbc558e3c5c7b2ab77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 08:54:47.443190  107778 out.go:99] Downloading VM boot image ...
	I1025 08:54:47.443241  107778 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21794-103842/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1025 08:54:59.071508  107778 out.go:99] Starting "download-only-484807" primary control-plane node in "download-only-484807" cluster
	I1025 08:54:59.071541  107778 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 08:54:59.166099  107778 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1025 08:54:59.166137  107778 cache.go:58] Caching tarball of preloaded images
	I1025 08:54:59.166314  107778 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 08:54:59.168006  107778 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1025 08:54:59.168024  107778 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1025 08:54:59.277144  107778 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1025 08:54:59.277274  107778 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21794-103842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-484807 host does not exist
	  To start a cluster, run: "minikube start -p download-only-484807"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
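Note: exit status 85 is the expected outcome here, since a --download-only profile never creates a host for logs to be collected from. A quick sketch to observe the same behavior, assuming the profile still exists:

	out/minikube-linux-amd64 logs -p download-only-484807
	echo "exit: $?"   # 85: the control-plane host was never started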

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-484807
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (15.03s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-633428 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-633428 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (15.034465626s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (15.03s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1025 08:55:26.934524  107766 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1025 08:55:26.934573  107766 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-103842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-633428
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-633428: exit status 85 (80.300359ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-484807 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-484807 │ jenkins │ v1.37.0 │ 25 Oct 25 08:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 25 Oct 25 08:55 UTC │ 25 Oct 25 08:55 UTC │
	│ delete  │ -p download-only-484807                                                                                                                                                 │ download-only-484807 │ jenkins │ v1.37.0 │ 25 Oct 25 08:55 UTC │ 25 Oct 25 08:55 UTC │
	│ start   │ -o=json --download-only -p download-only-633428 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-633428 │ jenkins │ v1.37.0 │ 25 Oct 25 08:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:55:11
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:55:11.955208  108033 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:55:11.955499  108033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:55:11.955510  108033 out.go:374] Setting ErrFile to fd 2...
	I1025 08:55:11.955516  108033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:55:11.955743  108033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
	I1025 08:55:11.956235  108033 out.go:368] Setting JSON to true
	I1025 08:55:11.957129  108033 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2253,"bootTime":1761380259,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:55:11.957224  108033 start.go:141] virtualization: kvm guest
	I1025 08:55:11.959458  108033 out.go:99] [download-only-633428] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 08:55:11.959697  108033 notify.go:220] Checking for updates...
	I1025 08:55:11.961142  108033 out.go:171] MINIKUBE_LOCATION=21794
	I1025 08:55:11.962698  108033 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:55:11.964640  108033 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21794-103842/kubeconfig
	I1025 08:55:11.966422  108033 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-103842/.minikube
	I1025 08:55:11.968177  108033 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 08:55:11.971066  108033 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 08:55:11.971355  108033 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:55:12.005951  108033 out.go:99] Using the kvm2 driver based on user configuration
	I1025 08:55:12.005996  108033 start.go:305] selected driver: kvm2
	I1025 08:55:12.006005  108033 start.go:925] validating driver "kvm2" against <nil>
	I1025 08:55:12.006464  108033 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:55:12.007247  108033 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1025 08:55:12.007506  108033 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 08:55:12.007542  108033 cni.go:84] Creating CNI manager for ""
	I1025 08:55:12.007612  108033 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 08:55:12.007632  108033 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 08:55:12.007692  108033 start.go:349] cluster config:
	{Name:download-only-633428 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-633428 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:55:12.007841  108033 iso.go:125] acquiring lock: {Name:mk13c1ce3bc6ed883268d1bbc558e3c5c7b2ab77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 08:55:12.009277  108033 out.go:99] Starting "download-only-633428" primary control-plane node in "download-only-633428" cluster
	I1025 08:55:12.009297  108033 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:55:12.944955  108033 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 08:55:12.945001  108033 cache.go:58] Caching tarball of preloaded images
	I1025 08:55:12.945736  108033 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:55:12.947822  108033 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1025 08:55:12.947850  108033 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1025 08:55:13.048662  108033 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1025 08:55:13.048718  108033 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21794-103842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-633428 host does not exist
	  To start a cluster, run: "minikube start -p download-only-633428"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-633428
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.68s)

=== RUN   TestBinaryMirror
I1025 08:55:27.618667  107766 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-029542 --alsologtostderr --binary-mirror http://127.0.0.1:39567 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-029542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-029542
--- PASS: TestBinaryMirror (0.68s)
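Note: a condensed sketch of the mirror flow above, assuming some HTTP server is already exporting the Kubernetes binaries at the given address (the test starts its own; 127.0.0.1:39567 is the port it happened to pick):

	out/minikube-linux-amd64 start --download-only -p binary-mirror-029542 \
	  --alsologtostderr --binary-mirror http://127.0.0.1:39567 \
	  --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p binary-mirror-029542   # cleanup, as the helper does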

                                                
                                    
TestOffline (55.72s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-916060 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-916060 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (54.839227569s)
helpers_test.go:175: Cleaning up "offline-crio-916060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-916060
--- PASS: TestOffline (55.72s)
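Note: the offline scenario can be re-run by hand; a minimal sketch using the same flags as the test:

	# Same flags as the test; --wait=true blocks until all components report Ready
	out/minikube-linux-amd64 start -p offline-crio-916060 --alsologtostderr -v=1 \
	  --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p offline-crio-916060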

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-887867
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-887867: exit status 85 (69.986294ms)

                                                
                                                
-- stdout --
	* Profile "addons-887867" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-887867"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-887867
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-887867: exit status 85 (69.480338ms)

                                                
                                                
-- stdout --
	* Profile "addons-887867" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-887867"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (204.14s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-887867 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-887867 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m24.140450996s)
--- PASS: TestAddons/Setup (204.14s)
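Note: the single long invocation above is easier to audit when split; the same command reformatted with a few addons per line (behavior is unchanged):

	out/minikube-linux-amd64 start -p addons-887867 --wait=true --memory=4096 --alsologtostderr \
	  --driver=kvm2 --container-runtime=crio \
	  --addons=registry --addons=registry-creds --addons=metrics-server \
	  --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
	  --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
	  --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
	  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher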

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-887867 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-887867 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.54s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-887867 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-887867 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4f578459-4fa3-4bbc-9671-7d3b637a2250] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4f578459-4fa3-4bbc-9671-7d3b637a2250] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.005300133s
addons_test.go:694: (dbg) Run:  kubectl --context addons-887867 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-887867 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-887867 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.54s)
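Note: the two printenv probes above are the heart of this check and can be repeated against any pod in a gcp-auth-enabled namespace:

	# Both variables are injected by the gcp-auth webhook at pod admission time
	kubectl --context addons-887867 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
	kubectl --context addons-887867 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"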

                                                
                                    
TestAddons/parallel/Registry (16.83s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.99409ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-7dz5f" [1edd293c-e746-4c50-959c-670be14152eb] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006025287s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-m5q4j" [270a52d1-0da0-45c7-a5df-ca1ec37ad476] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004689701s
addons_test.go:392: (dbg) Run:  kubectl --context addons-887867 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-887867 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-887867 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.039518705s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 ip
2025/10/25 08:59:28 [DEBUG] GET http://192.168.39.204:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.83s)
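Note: the end-to-end registry probe boils down to one throwaway pod; a sketch reusing the exact image and service DNS name from the run:

	kubectl --context addons-887867 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"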

                                                
                                    
TestAddons/parallel/RegistryCreds (0.77s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.852036ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-887867
addons_test.go:332: (dbg) Run:  kubectl --context addons-887867 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.77s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-vnf5t" [989130a8-931c-4aed-a69e-d0ab4dac2a74] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004261244s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.30s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.54s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 11.082125ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-ghqsd" [518fa040-cf86-462a-b880-49bbd614627a] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007698952s
addons_test.go:463: (dbg) Run:  kubectl --context addons-887867 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-887867 addons disable metrics-server --alsologtostderr -v=1: (1.433310353s)
--- PASS: TestAddons/parallel/MetricsServer (6.54s)
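Note: the functional check is simply that the metrics API serves data; once metrics-server is Running, this should return per-pod CPU/memory figures:

	kubectl --context addons-887867 top pods -n kube-system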

                                                
                                    
TestAddons/parallel/CSI (67.12s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1025 08:59:28.753788  107766 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1025 08:59:28.760682  107766 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1025 08:59:28.760711  107766 kapi.go:107] duration metric: took 6.945468ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.956256ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-887867 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-887867 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [acf56cfd-20c9-4066-b5c9-c2f8e06abfe4] Pending
helpers_test.go:352: "task-pv-pod" [acf56cfd-20c9-4066-b5c9-c2f8e06abfe4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [acf56cfd-20c9-4066-b5c9-c2f8e06abfe4] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.003990827s
addons_test.go:572: (dbg) Run:  kubectl --context addons-887867 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-887867 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-887867 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-887867 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-887867 delete pod task-pv-pod: (1.201497957s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-887867 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-887867 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-887867 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [97125a1f-12b8-4cb9-b748-ace4113c8ec8] Pending
helpers_test.go:352: "task-pv-pod-restore" [97125a1f-12b8-4cb9-b748-ace4113c8ec8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [97125a1f-12b8-4cb9-b748-ace4113c8ec8] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004557786s
addons_test.go:614: (dbg) Run:  kubectl --context addons-887867 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-887867 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-887867 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-887867 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.062640816s)
--- PASS: TestAddons/parallel/CSI (67.12s)
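Note: condensed, the snapshot-and-restore flow this subtest walks through (manifests are the test's own testdata; readiness waits elided):

	kubectl --context addons-887867 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-887867 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-887867 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-887867 delete pod task-pv-pod
	kubectl --context addons-887867 delete pvc hpvc
	kubectl --context addons-887867 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # PVC sourced from the snapshot
	kubectl --context addons-887867 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod re-reads the restored data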

                                                
                                    
TestAddons/parallel/Headlamp (21.23s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-887867 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-trkc4" [1e8d5d5a-0824-4535-89e1-5ae5386aab89] Pending
helpers_test.go:352: "headlamp-6945c6f4d-trkc4" [1e8d5d5a-0824-4535-89e1-5ae5386aab89] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-trkc4" [1e8d5d5a-0824-4535-89e1-5ae5386aab89] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004150456s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-887867 addons disable headlamp --alsologtostderr -v=1: (6.273208117s)
--- PASS: TestAddons/parallel/Headlamp (21.23s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.65s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-78h4l" [94fe5be3-da60-4dcb-9a86-a8161d1f1bfb] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003879711s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.65s)

                                                
                                    
TestAddons/parallel/LocalPath (56.95s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-887867 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-887867 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-887867 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [3a314081-acf5-475c-a00d-b1231ab11ef9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [3a314081-acf5-475c-a00d-b1231ab11ef9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [3a314081-acf5-475c-a00d-b1231ab11ef9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.004662514s
addons_test.go:967: (dbg) Run:  kubectl --context addons-887867 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 ssh "cat /opt/local-path-provisioner/pvc-11aeedcd-875b-4940-9537-fce0630e7a57_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-887867 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-887867 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-887867 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.098225881s)
--- PASS: TestAddons/parallel/LocalPath (56.95s)
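For reference, the repeated helpers_test.go:402 calls above are a phase poll; a minimal sketch of the same wait using the logged kubectl query (the 5s interval is an assumption, the helper's actual backoff may differ):

until [ "$(kubectl --context addons-887867 get pvc test-pvc -n default -o jsonpath={.status.phase})" = "Bound" ]; do
  sleep 5   # re-check until the local-path provisioner binds the claim
done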

TestAddons/parallel/NvidiaDevicePlugin (6.95s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-pmvsc" [1a7a19ae-d10d-485f-a8b7-b25acbe309b2] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.106621576s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.95s)

TestAddons/parallel/Yakd (10.82s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-vgwbz" [0c37aff9-2419-4dfe-9096-5a92159ff4ad] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004639381s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-887867 addons disable yakd --alsologtostderr -v=1: (5.813544714s)
--- PASS: TestAddons/parallel/Yakd (10.82s)

TestAddons/StoppedEnableDisable (77.04s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-887867
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-887867: (1m16.827402135s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-887867
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-887867
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-887867
--- PASS: TestAddons/StoppedEnableDisable (77.04s)
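The point of this test is that addon toggling works while the cluster is down; the logged sequence, as standalone commands:

minikube stop -p addons-887867                       # the stop itself is most of the 77s
minikube addons enable dashboard -p addons-887867    # succeeds against the stopped cluster
minikube addons disable dashboard -p addons-887867
minikube addons disable gvisor -p addons-887867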

TestCertOptions (92.76s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-852212 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-852212 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m31.31921952s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-852212 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-852212 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-852212 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-852212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-852212
--- PASS: TestCertOptions (92.76s)
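The openssl step above is where the custom SANs and port are asserted; a sketch of checking them by hand (the grep target is an assumption based on openssl's text output format, not the test's exact matcher):

minikube -p cert-options-852212 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# the SAN list should include IP:192.168.15.15, DNS:localhost and DNS:www.google.com,
# with the apiserver serving on the custom port 8555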

TestCertExpiration (277.45s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-230110 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-230110 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m9.136729166s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-230110 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-230110 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (27.439502649s)
helpers_test.go:175: Cleaning up "cert-expiration-230110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-230110
--- PASS: TestCertExpiration (277.45s)
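Most of this test's 277s is spent waiting out the 3-minute certificate lifetime between the two starts; a sketch of the flow with a throwaway profile (the profile name and explicit sleep are assumptions):

minikube start -p certs-demo --cert-expiration=3m --driver=kvm2 --container-runtime=crio
sleep 180                                             # let the short-lived certs lapse
minikube start -p certs-demo --cert-expiration=8760h  # the restart must regenerate them
minikube delete -p certs-demo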

TestForceSystemdFlag (63.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-341224 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1025 09:51:37.048603  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-341224 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m1.982879606s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-341224 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-341224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-341224
--- PASS: TestForceSystemdFlag (63.05s)
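The final ssh step reads CRI-O's drop-in config to confirm --force-systemd took effect; a sketch of the same check (the expected value is an assumption based on CRI-O's cgroup_manager setting):

minikube -p force-systemd-flag-341224 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
# expected: cgroup_manager = "systemd"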

TestForceSystemdEnv (69.49s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-246007 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-246007 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m8.459275721s)
helpers_test.go:175: Cleaning up "force-systemd-env-246007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-246007
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-246007: (1.028101866s)
--- PASS: TestForceSystemdEnv (69.49s)

TestErrorSpam/setup (36.4s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-926213 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-926213 --driver=kvm2  --container-runtime=crio
E1025 09:03:53.203483  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:03:53.209938  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:03:53.221397  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:03:53.242844  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:03:53.284332  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:03:53.366191  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:03:53.527799  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:03:53.849549  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:03:54.491672  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:03:55.773298  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:03:58.336291  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:04:03.458379  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-926213 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-926213 --driver=kvm2  --container-runtime=crio: (36.40173285s)
--- PASS: TestErrorSpam/setup (36.40s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.7s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 status
--- PASS: TestErrorSpam/status (0.70s)

TestErrorSpam/pause (1.56s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 pause
--- PASS: TestErrorSpam/pause (1.56s)

TestErrorSpam/unpause (1.86s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

TestErrorSpam/stop (5.38s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 stop: (2.12358959s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 stop: (1.50299469s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 stop
E1025 09:04:13.700112  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-926213 --log_dir /tmp/nospam-926213 stop: (1.751938986s)
--- PASS: TestErrorSpam/stop (5.38s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21794-103842/.minikube/files/etc/test/nested/copy/107766/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (58.79s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-494713 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1025 09:04:34.182380  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-494713 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (58.786781283s)
--- PASS: TestFunctional/serial/StartWithProxy (58.79s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.52s)

=== RUN   TestFunctional/serial/SoftStart
I1025 09:05:14.521707  107766 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-494713 --alsologtostderr -v=8
E1025 09:05:15.144412  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-494713 --alsologtostderr -v=8: (28.519847067s)
functional_test.go:678: soft start took 28.520763678s for "functional-494713" cluster.
I1025 09:05:43.041902  107766 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (28.52s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-494713 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-494713 cache add registry.k8s.io/pause:3.1: (1.505747067s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-494713 cache add registry.k8s.io/pause:3.3: (1.633467637s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-494713 cache add registry.k8s.io/pause:latest: (1.620680183s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.76s)

TestFunctional/serial/CacheCmd/cache/add_local (2.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-494713 /tmp/TestFunctionalserialCacheCmdcacheadd_local58010929/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 cache add minikube-local-cache-test:functional-494713
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-494713 cache add minikube-local-cache-test:functional-494713: (2.260781812s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 cache delete minikube-local-cache-test:functional-494713
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-494713
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.62s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-494713 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (185.315677ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-494713 cache reload: (1.71308819s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.33s)
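The round trip above as standalone commands: remove the image from the node, confirm inspecti fails, then let cache reload push the host-side cache back in:

minikube -p functional-494713 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-494713 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1, image gone
minikube -p functional-494713 cache reload
minikube -p functional-494713 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again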

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 kubectl -- --context functional-494713 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-494713 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (36.42s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-494713 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-494713 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.423937494s)
functional_test.go:776: restart took 36.424086027s for "functional-494713" cluster.
I1025 09:06:30.036439  107766 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (36.42s)
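--extra-config takes component.key=value pairs and forwards them to the named control-plane component; the invocation above, in isolation, enables an apiserver admission plugin:

minikube start -p functional-494713 --wait=all \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision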

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-494713 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-494713 logs: (1.492369135s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

TestFunctional/serial/LogsFileCmd (1.43s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 logs --file /tmp/TestFunctionalserialLogsFileCmd2164790801/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-494713 logs --file /tmp/TestFunctionalserialLogsFileCmd2164790801/001/logs.txt: (1.429676772s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.43s)

TestFunctional/serial/InvalidService (4.01s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-494713 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-494713
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-494713: exit status 115 (247.083028ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.175:32250 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-494713 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)
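What this asserts: minikube service refuses to hand out a URL for a service with no running endpoints, exiting 115 with SVC_UNREACHABLE; the same check in isolation:

kubectl --context functional-494713 apply -f testdata/invalidsvc.yaml
minikube service invalid-svc -p functional-494713; echo $?   # prints 115
kubectl --context functional-494713 delete -f testdata/invalidsvc.yaml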

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-494713 config get cpus: exit status 14 (74.483607ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-494713 config get cpus: exit status 14 (61.706466ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
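The exit-14 cases above encode the config lifecycle: config get on an unset key fails with "specified key could not be found in config"; in sequence:

minikube -p functional-494713 config get cpus    # exit 14: key not set
minikube -p functional-494713 config set cpus 2
minikube -p functional-494713 config get cpus    # prints 2
minikube -p functional-494713 config unset cpus
minikube -p functional-494713 config get cpus    # exit 14 again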

TestFunctional/parallel/DashboardCmd (35.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-494713 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-494713 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 114234: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (35.20s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-494713 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-494713 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (114.23523ms)
-- stdout --
	* [functional-494713] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-103842/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-103842/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
-- /stdout --
** stderr ** 
	I1025 09:06:39.260797  113441 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:06:39.261059  113441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:06:39.261068  113441 out.go:374] Setting ErrFile to fd 2...
	I1025 09:06:39.261072  113441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:06:39.261250  113441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
	I1025 09:06:39.261677  113441 out.go:368] Setting JSON to false
	I1025 09:06:39.262507  113441 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2940,"bootTime":1761380259,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:06:39.262615  113441 start.go:141] virtualization: kvm guest
	I1025 09:06:39.264831  113441 out.go:179] * [functional-494713] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:06:39.266243  113441 notify.go:220] Checking for updates...
	I1025 09:06:39.266259  113441 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:06:39.267727  113441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:06:39.269108  113441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-103842/kubeconfig
	I1025 09:06:39.270392  113441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-103842/.minikube
	I1025 09:06:39.271517  113441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:06:39.272990  113441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:06:39.274888  113441 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:06:39.275492  113441 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:06:39.307489  113441 out.go:179] * Using the kvm2 driver based on existing profile
	I1025 09:06:39.308921  113441 start.go:305] selected driver: kvm2
	I1025 09:06:39.308942  113441 start.go:925] validating driver "kvm2" against &{Name:functional-494713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-494713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:06:39.309079  113441 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:06:39.311552  113441 out.go:203] 
	W1025 09:06:39.312851  113441 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 09:06:39.314071  113441 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-494713 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.23s)
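Note that --dry-run still runs resource validation, which is what the exit-23 case exercises: a 250MB request trips the 1800MB usable minimum (RSRC_INSUFFICIENT_REQ_MEMORY) before anything is created:

minikube start -p functional-494713 --dry-run --memory 250MB \
  --driver=kvm2 --container-runtime=crio; echo $?   # prints 23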

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-494713 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-494713 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (118.60509ms)
-- stdout --
	* [functional-494713] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-103842/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-103842/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I1025 09:06:39.493240  113473 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:06:39.493511  113473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:06:39.493522  113473 out.go:374] Setting ErrFile to fd 2...
	I1025 09:06:39.493528  113473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:06:39.493851  113473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
	I1025 09:06:39.494305  113473 out.go:368] Setting JSON to false
	I1025 09:06:39.495167  113473 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2940,"bootTime":1761380259,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:06:39.495263  113473 start.go:141] virtualization: kvm guest
	I1025 09:06:39.497021  113473 out.go:179] * [functional-494713] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1025 09:06:39.498190  113473 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:06:39.498202  113473 notify.go:220] Checking for updates...
	I1025 09:06:39.500299  113473 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:06:39.501662  113473 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-103842/kubeconfig
	I1025 09:06:39.503056  113473 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-103842/.minikube
	I1025 09:06:39.504605  113473 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:06:39.506158  113473 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:06:39.508138  113473 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:06:39.508715  113473 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:06:39.542546  113473 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1025 09:06:39.543979  113473 start.go:305] selected driver: kvm2
	I1025 09:06:39.543997  113473 start.go:925] validating driver "kvm2" against &{Name:functional-494713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-494713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:06:39.544133  113473 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:06:39.546613  113473 out.go:203] 
	W1025 09:06:39.548320  113473 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 09:06:39.549803  113473 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.76s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.76s)

TestFunctional/parallel/ServiceCmdConnect (12.81s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-494713 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-494713 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-wwzbt" [3a14c0a2-d9e6-4b97-8731-bec95cd325e6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-wwzbt" [3a14c0a2-d9e6-4b97-8731-bec95cd325e6] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.201267169s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.175:31504
functional_test.go:1680: http://192.168.39.175:31504: success! body:
Request served by hello-node-connect-7d85dfc575-wwzbt
HTTP/1.1 GET /
Host: 192.168.39.175:31504
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.81s)
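The NodePort round trip above, end to end (31504 is this run's allocated port; service --url reports whatever was assigned):

kubectl --context functional-494713 create deployment hello-node-connect --image kicbase/echo-server
kubectl --context functional-494713 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(minikube -p functional-494713 service hello-node-connect --url)
curl "$URL/"   # echo-server replies with the request it received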

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (48.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [52c854d6-0a9e-479d-9e45-28abe42916c3] Running
E1025 09:06:37.066571  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006011727s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-494713 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-494713 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-494713 get pvc myclaim -o=json
I1025 09:06:43.336629  107766 retry.go:31] will retry after 2.721767362s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:3cb1d904-2d98-435d-9b47-2c6a0e1e0f48 ResourceVersion:718 Generation:0 CreationTimestamp:2025-10-25 09:06:43 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0019e0060 VolumeMode:0xc0019e0070 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-494713 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-494713 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f42ac1d7-8cea-4c74-a768-b324c9384838] Pending
helpers_test.go:352: "sp-pod" [f42ac1d7-8cea-4c74-a768-b324c9384838] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [f42ac1d7-8cea-4c74-a768-b324c9384838] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004431392s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-494713 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-494713 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-494713 delete -f testdata/storage-provisioner/pod.yaml: (2.386238753s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-494713 apply -f testdata/storage-provisioner/pod.yaml
I1025 09:07:03.951818  107766 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4284c758-0b77-4c4b-8df9-eb7f467cca9e] Pending
helpers_test.go:352: "sp-pod" [4284c758-0b77-4c4b-8df9-eb7f467cca9e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [4284c758-0b77-4c4b-8df9-eb7f467cca9e] Running
2025/10/25 09:07:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004022607s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-494713 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.04s)
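
The persistence check above reduces to this sequence (a sketch; pvc.yaml and pod.yaml stand in for the testdata manifests, and the jsonpath form of kubectl wait assumes a reasonably recent kubectl):

  kubectl apply -f pvc.yaml       # claim stays Pending until the provisioner binds it
  kubectl wait pvc/myclaim --for=jsonpath='{.status.phase}'=Bound --timeout=60s
  kubectl apply -f pod.yaml       # sp-pod mounts the claim at /tmp/mount
  kubectl exec sp-pod -- touch /tmp/mount/foo
  kubectl delete -f pod.yaml && kubectl apply -f pod.yaml
  kubectl exec sp-pod -- ls /tmp/mount   # foo must survive the pod being recreated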

x
+
TestFunctional/parallel/SSHCmd (0.37s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.37s)

x
+
TestFunctional/parallel/CpCmd (1.13s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh -n functional-494713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 cp functional-494713:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3801518689/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh -n functional-494713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh -n functional-494713 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.13s)
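
The three cp/cat pairs above cover host-to-guest copy, guest-to-host copy, and copy into a guest directory that does not yet exist. In general form (a sketch; paths are illustrative):

  minikube -p <profile> cp local.txt /home/docker/remote.txt            # host -> guest
  minikube -p <profile> cp <profile>:/home/docker/remote.txt ./out.txt  # guest -> host
  minikube -p <profile> ssh -n <profile> "sudo cat /home/docker/remote.txt"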

x
+
TestFunctional/parallel/MySQL (29.05s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-494713 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-6jz7b" [bf4fb005-f4e2-4c05-bb6a-40d94743f0a1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-6jz7b" [bf4fb005-f4e2-4c05-bb6a-40d94743f0a1] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.012344792s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-494713 exec mysql-5bb876957f-6jz7b -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-494713 exec mysql-5bb876957f-6jz7b -- mysql -ppassword -e "show databases;": exit status 1 (258.426566ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1025 09:07:13.797092  107766 retry.go:31] will retry after 953.034377ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-494713 exec mysql-5bb876957f-6jz7b -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-494713 exec mysql-5bb876957f-6jz7b -- mysql -ppassword -e "show databases;": exit status 1 (160.452701ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1025 09:07:14.911540  107766 retry.go:31] will retry after 1.000133886s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-494713 exec mysql-5bb876957f-6jz7b -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-494713 exec mysql-5bb876957f-6jz7b -- mysql -ppassword -e "show databases;": exit status 1 (117.526721ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1025 09:07:16.030367  107766 retry.go:31] will retry after 3.21188306s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-494713 exec mysql-5bb876957f-6jz7b -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.05s)
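
The intermediate ERROR 1045/2002 exits are expected while mysqld is still initializing inside the container; the harness simply retries with increasing backoff until the query succeeds. A rough shell equivalent (a sketch, not the test's actual mechanism):

  until kubectl exec deploy/mysql -- mysql -ppassword -e 'show databases;'; do
    sleep 2   # fixed delay; the test's retry.go uses randomized increasing delays
  done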

x
+
TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/107766/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "sudo cat /etc/test/nested/copy/107766/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)
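
File sync works by mirroring the tree under $MINIKUBE_HOME/files into the VM when it starts; the test seeds a file keyed by the test process PID (107766). Using the same mechanism by hand (a sketch; paths are illustrative):

  mkdir -p ~/.minikube/files/etc/myapp
  echo "hello" > ~/.minikube/files/etc/myapp/conf
  minikube start                       # the file lands in the VM as /etc/myapp/conf
  minikube ssh "cat /etc/myapp/conf"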

x
+
TestFunctional/parallel/CertSync (1.16s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/107766.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "sudo cat /etc/ssl/certs/107766.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/107766.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "sudo cat /usr/share/ca-certificates/107766.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1077662.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "sudo cat /etc/ssl/certs/1077662.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1077662.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "sudo cat /usr/share/ca-certificates/1077662.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.16s)
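
Each certificate is checked under three paths because /etc/ssl/certs also carries OpenSSL subject-hash names (51391683.0, 3ec20f2e.0) for the same PEM files. The hash for a given cert can be recomputed with (a sketch):

  openssl x509 -noout -subject_hash -in /etc/ssl/certs/107766.pem   # prints e.g. 51391683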

x
+
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-494713 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-494713 ssh "sudo systemctl is-active docker": exit status 1 (195.737633ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-494713 ssh "sudo systemctl is-active containerd": exit status 1 (169.778944ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)
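
The "Process exited with status 3" is systemctl's exit code, not an ssh failure: systemctl is-active exits non-zero (3 for an inactive unit) and minikube ssh propagates it. On this crio-backed profile only crio should be active (a sketch; <profile> is a placeholder):

  minikube -p <profile> ssh "sudo systemctl is-active crio"    # active, exit 0
  minikube -p <profile> ssh "sudo systemctl is-active docker"  # inactive, exit 3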

x
+
TestFunctional/parallel/License (0.91s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.91s)

x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-494713 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-494713 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-56nfw" [6829ba41-e3cd-468d-9486-de50ee087b15] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-56nfw" [6829ba41-e3cd-468d-9486-de50ee087b15] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004691561s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "282.019451ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "64.445002ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

x
+
TestFunctional/parallel/MountCmd/any-port (8.33s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-494713 /tmp/TestFunctionalparallelMountCmdany-port1944495080/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761383198677450816" to /tmp/TestFunctionalparallelMountCmdany-port1944495080/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761383198677450816" to /tmp/TestFunctionalparallelMountCmdany-port1944495080/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761383198677450816" to /tmp/TestFunctionalparallelMountCmdany-port1944495080/001/test-1761383198677450816
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-494713 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (170.083197ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 09:06:38.848008  107766 retry.go:31] will retry after 636.871627ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 09:06 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 09:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 09:06 test-1761383198677450816
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh cat /mount-9p/test-1761383198677450816
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-494713 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [b69affc1-785d-471b-8854-4eb1210faac7] Pending
helpers_test.go:352: "busybox-mount" [b69affc1-785d-471b-8854-4eb1210faac7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [b69affc1-785d-471b-8854-4eb1210faac7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [b69affc1-785d-471b-8854-4eb1210faac7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.006314103s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-494713 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-494713 /tmp/TestFunctionalparallelMountCmdany-port1944495080/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.33s)
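
The subtest drives minikube's 9p export end to end: mount a host directory into the guest, verify it with findmnt, exercise it from a pod, then force-unmount. Condensed (a sketch; paths are illustrative):

  minikube -p <profile> mount /tmp/data:/mount-9p &   # keeps running until killed
  minikube -p <profile> ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p <profile> ssh "ls -la /mount-9p"
  minikube -p <profile> ssh "sudo umount -f /mount-9p"
  kill %1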

x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "249.255887ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "63.343781ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

x
+
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

x
+
TestFunctional/parallel/Version/components (0.61s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-494713 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-494713
localhost/kicbase/echo-server:functional-494713
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-494713 image ls --format short --alsologtostderr:
I1025 09:07:03.063173  114378 out.go:360] Setting OutFile to fd 1 ...
I1025 09:07:03.063428  114378 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:07:03.063436  114378 out.go:374] Setting ErrFile to fd 2...
I1025 09:07:03.063441  114378 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:07:03.063620  114378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
I1025 09:07:03.064204  114378 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:07:03.064299  114378 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:07:03.066377  114378 ssh_runner.go:195] Run: systemctl --version
I1025 09:07:03.069029  114378 main.go:141] libmachine: domain functional-494713 has defined MAC address 52:54:00:ac:90:ff in network mk-functional-494713
I1025 09:07:03.069447  114378 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:90:ff", ip: ""} in network mk-functional-494713: {Iface:virbr1 ExpiryTime:2025-10-25 10:04:31 +0000 UTC Type:0 Mac:52:54:00:ac:90:ff Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-494713 Clientid:01:52:54:00:ac:90:ff}
I1025 09:07:03.069474  114378 main.go:141] libmachine: domain functional-494713 has defined IP address 192.168.39.175 and MAC address 52:54:00:ac:90:ff in network mk-functional-494713
I1025 09:07:03.069605  114378 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/functional-494713/id_rsa Username:docker}
I1025 09:07:03.165958  114378 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-494713 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-494713  │ 1496ce816a84a │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-494713  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/nginx                 │ latest             │ 657fdcd1c3659 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ localhost/my-image                      │ functional-494713  │ 2e912f8d781c7 │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-494713 image ls --format table --alsologtostderr:
I1025 09:07:09.909283  114469 out.go:360] Setting OutFile to fd 1 ...
I1025 09:07:09.909559  114469 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:07:09.909570  114469 out.go:374] Setting ErrFile to fd 2...
I1025 09:07:09.909576  114469 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:07:09.909822  114469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
I1025 09:07:09.910422  114469 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:07:09.910542  114469 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:07:09.912555  114469 ssh_runner.go:195] Run: systemctl --version
I1025 09:07:09.915294  114469 main.go:141] libmachine: domain functional-494713 has defined MAC address 52:54:00:ac:90:ff in network mk-functional-494713
I1025 09:07:09.915868  114469 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:90:ff", ip: ""} in network mk-functional-494713: {Iface:virbr1 ExpiryTime:2025-10-25 10:04:31 +0000 UTC Type:0 Mac:52:54:00:ac:90:ff Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-494713 Clientid:01:52:54:00:ac:90:ff}
I1025 09:07:09.915919  114469 main.go:141] libmachine: domain functional-494713 has defined IP address 192.168.39.175 and MAC address 52:54:00:ac:90:ff in network mk-functional-494713
I1025 09:07:09.916127  114469 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/functional-494713/id_rsa Username:docker}
I1025 09:07:10.019942  114469 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-494713 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"1496ce816a84a56326a2528274b8ae00bd03a60e99db8737d9e8a19daeb6b05d","repoDigests":["localhost/minikube-local-cache-test@sha256:071ac2021f032e9b4a1161f6c3a2abc27b119826dc100bd30cbf78f826cddb5f"],"repoTags":["localhost/minikube-local-cache-test:functional-494713"],"size":"3330"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8"],"repoTags":["docker.io/library/nginx:latest"],"size":"155467611"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2e912f8d781c757df586ab0c9a1f383d8ab318be92e0828e6952ae9829277fd1","repoDigests":["localhost/my-image@sha256:8f12f2a21eaaa6522402e75991dee5ce3f6553f0dfb33405030ac6a2b22ba06a"],"repoTags":["localhost/my-image:functional-494713"],"size":"1468600"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"54bdf96a89f81069dd993643ae7202f4b91dcab1803970d0b0f17b2e304b411d","repoDigests":["docker.io/library/7606c308a4f039c24985de14f222081f3f0decedbf1f326e9d0add9f8afcbc0c-tmp@sha256:8c38bbf123ba17f4250be909459b0311329899afde6fb2f66e1004cf7c479a2d"],"repoTags":[],"size":"1466018"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-494713"],"size":"4944818"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-494713 image ls --format json --alsologtostderr:
I1025 09:07:09.668753  114458 out.go:360] Setting OutFile to fd 1 ...
I1025 09:07:09.669066  114458 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:07:09.669076  114458 out.go:374] Setting ErrFile to fd 2...
I1025 09:07:09.669081  114458 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:07:09.669360  114458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
I1025 09:07:09.670018  114458 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:07:09.670132  114458 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:07:09.672578  114458 ssh_runner.go:195] Run: systemctl --version
I1025 09:07:09.675091  114458 main.go:141] libmachine: domain functional-494713 has defined MAC address 52:54:00:ac:90:ff in network mk-functional-494713
I1025 09:07:09.675583  114458 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:90:ff", ip: ""} in network mk-functional-494713: {Iface:virbr1 ExpiryTime:2025-10-25 10:04:31 +0000 UTC Type:0 Mac:52:54:00:ac:90:ff Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-494713 Clientid:01:52:54:00:ac:90:ff}
I1025 09:07:09.675620  114458 main.go:141] libmachine: domain functional-494713 has defined IP address 192.168.39.175 and MAC address 52:54:00:ac:90:ff in network mk-functional-494713
I1025 09:07:09.675794  114458 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/functional-494713/id_rsa Username:docker}
I1025 09:07:09.770548  114458 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
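
The JSON format is the machine-readable variant of image ls; the id/repoTags fields can be picked apart with jq, for example (a sketch; assumes jq is installed):

  minikube -p <profile> image ls --format json \
    | jq -r '.[] | "\(.id[0:13])  \(.repoTags[0] // "<none>")"'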

x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-494713 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 1496ce816a84a56326a2528274b8ae00bd03a60e99db8737d9e8a19daeb6b05d
repoDigests:
- localhost/minikube-local-cache-test@sha256:071ac2021f032e9b4a1161f6c3a2abc27b119826dc100bd30cbf78f826cddb5f
repoTags:
- localhost/minikube-local-cache-test:functional-494713
size: "3330"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8
repoTags:
- docker.io/library/nginx:latest
size: "155467611"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-494713
size: "4944818"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-494713 image ls --format yaml --alsologtostderr:
I1025 09:07:03.312066  114389 out.go:360] Setting OutFile to fd 1 ...
I1025 09:07:03.312359  114389 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:07:03.312369  114389 out.go:374] Setting ErrFile to fd 2...
I1025 09:07:03.312373  114389 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:07:03.312560  114389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
I1025 09:07:03.313157  114389 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:07:03.313264  114389 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:07:03.315676  114389 ssh_runner.go:195] Run: systemctl --version
I1025 09:07:03.318910  114389 main.go:141] libmachine: domain functional-494713 has defined MAC address 52:54:00:ac:90:ff in network mk-functional-494713
I1025 09:07:03.319426  114389 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:90:ff", ip: ""} in network mk-functional-494713: {Iface:virbr1 ExpiryTime:2025-10-25 10:04:31 +0000 UTC Type:0 Mac:52:54:00:ac:90:ff Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-494713 Clientid:01:52:54:00:ac:90:ff}
I1025 09:07:03.319456  114389 main.go:141] libmachine: domain functional-494713 has defined IP address 192.168.39.175 and MAC address 52:54:00:ac:90:ff in network mk-functional-494713
I1025 09:07:03.319652  114389 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/functional-494713/id_rsa Username:docker}
I1025 09:07:03.413030  114389 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-494713 ssh pgrep buildkitd: exit status 1 (228.226265ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image build -t localhost/my-image:functional-494713 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-494713 image build -t localhost/my-image:functional-494713 testdata/build --alsologtostderr: (5.666577151s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-494713 image build -t localhost/my-image:functional-494713 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 54bdf96a89f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-494713
--> 2e912f8d781
Successfully tagged localhost/my-image:functional-494713
2e912f8d781c757df586ab0c9a1f383d8ab318be92e0828e6952ae9829277fd1
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-494713 image build -t localhost/my-image:functional-494713 testdata/build --alsologtostderr:
I1025 09:07:03.777102  114410 out.go:360] Setting OutFile to fd 1 ...
I1025 09:07:03.777395  114410 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:07:03.777405  114410 out.go:374] Setting ErrFile to fd 2...
I1025 09:07:03.777409  114410 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:07:03.777637  114410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
I1025 09:07:03.778260  114410 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:07:03.778977  114410 config.go:182] Loaded profile config "functional-494713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:07:03.781654  114410 ssh_runner.go:195] Run: systemctl --version
I1025 09:07:03.784747  114410 main.go:141] libmachine: domain functional-494713 has defined MAC address 52:54:00:ac:90:ff in network mk-functional-494713
I1025 09:07:03.785229  114410 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:90:ff", ip: ""} in network mk-functional-494713: {Iface:virbr1 ExpiryTime:2025-10-25 10:04:31 +0000 UTC Type:0 Mac:52:54:00:ac:90:ff Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-494713 Clientid:01:52:54:00:ac:90:ff}
I1025 09:07:03.785261  114410 main.go:141] libmachine: domain functional-494713 has defined IP address 192.168.39.175 and MAC address 52:54:00:ac:90:ff in network mk-functional-494713
I1025 09:07:03.785414  114410 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/functional-494713/id_rsa Username:docker}
I1025 09:07:03.896203  114410 build_images.go:161] Building image from path: /tmp/build.316378016.tar
I1025 09:07:03.896293  114410 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 09:07:03.922689  114410 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.316378016.tar
I1025 09:07:03.932831  114410 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.316378016.tar: stat -c "%s %y" /var/lib/minikube/build/build.316378016.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.316378016.tar': No such file or directory
I1025 09:07:03.932878  114410 ssh_runner.go:362] scp /tmp/build.316378016.tar --> /var/lib/minikube/build/build.316378016.tar (3072 bytes)
I1025 09:07:03.985886  114410 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.316378016
I1025 09:07:04.012047  114410 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.316378016 -xf /var/lib/minikube/build/build.316378016.tar
I1025 09:07:04.033333  114410 crio.go:315] Building image: /var/lib/minikube/build/build.316378016
I1025 09:07:04.033406  114410 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-494713 /var/lib/minikube/build/build.316378016 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1025 09:07:09.337657  114410 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-494713 /var/lib/minikube/build/build.316378016 --cgroup-manager=cgroupfs: (5.304215461s)
I1025 09:07:09.337829  114410 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.316378016
I1025 09:07:09.357640  114410 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.316378016.tar
I1025 09:07:09.376057  114410 build_images.go:217] Built localhost/my-image:functional-494713 from /tmp/build.316378016.tar
I1025 09:07:09.376106  114410 build_images.go:133] succeeded building to: functional-494713
I1025 09:07:09.376111  114410 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.12s)
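
Note: the STEP 1/3 .. STEP 3/3 lines above come from podman building a three-instruction Containerfile inside the VM. A minimal sketch of reproducing that build through the CLI, assuming the minikube binary is on PATH; the Containerfile text is reconstructed from the logged steps and the content.txt payload is a stand-in:

// buildsketch.go: a minimal sketch, not the test's actual code. It
// reconstructs the Containerfile implied by the STEP 1/3..3/3 lines
// above and drives `minikube image build` the way functional_test.go:330
// appears to (binary name and payload are assumptions).
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	// Containerfile matching the three logged build steps.
	containerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(containerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Equivalent of the logged command:
	// minikube -p functional-494713 image build -t localhost/my-image:functional-494713 <dir>
	out, err := exec.Command("minikube", "-p", "functional-494713",
		"image", "build", "-t", "localhost/my-image:functional-494713", dir).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatal(err)
	}
}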

TestFunctional/parallel/ImageCommands/Setup (1.8s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.775576486s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-494713
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.80s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image load --daemon kicbase/echo-server:functional-494713 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-494713 image load --daemon kicbase/echo-server:functional-494713 --alsologtostderr: (1.082661192s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image load --daemon kicbase/echo-server:functional-494713 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-494713
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image load --daemon kicbase/echo-server:functional-494713 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.75s)

TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 service list
I1025 09:06:46.257161  107766 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 service list -o json
functional_test.go:1504: Took "325.440293ms" to run "out/minikube-linux-amd64 -p functional-494713 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image save kicbase/echo-server:functional-494713 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.175:32701
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)
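
Note: the test above only asserts that `service --https --url` prints an endpoint (here https://192.168.39.175:32701, i.e. the node IP plus a NodePort). A sketch of the parse-level check that implies; whether the backend actually speaks TLS is a separate question the test does not probe:

// endpointcheck.go: sketch of validating a `minikube service --url`
// endpoint. The endpoint string is copied from the log above.
package main

import (
	"log"
	"net"
	"net/url"
)

func main() {
	endpoint := "https://192.168.39.175:32701" // as found by the test above
	u, err := url.Parse(endpoint)
	if err != nil {
		log.Fatalf("malformed endpoint: %v", err)
	}
	host, port, err := net.SplitHostPort(u.Host)
	if err != nil {
		log.Fatalf("endpoint missing port: %v", err)
	}
	log.Printf("scheme=%s nodeIP=%s nodePort=%s", u.Scheme, host, port)
}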

TestFunctional/parallel/MountCmd/specific-port (1.56s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-494713 /tmp/TestFunctionalparallelMountCmdspecific-port2772341239/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-494713 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (222.136698ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1025 09:06:47.234585  107766 retry.go:31] will retry after 587.049099ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-494713 /tmp/TestFunctionalparallelMountCmdspecific-port2772341239/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-494713 ssh "sudo umount -f /mount-9p": exit status 1 (171.206954ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-494713 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-494713 /tmp/TestFunctionalparallelMountCmdspecific-port2772341239/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.56s)
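
Note: the `retry.go:31] will retry after 587.049099ms` line shows the harness polling findmnt with randomized backoff until the 9p mount appears. A minimal sketch of that poll-with-jittered-backoff pattern, assuming nothing about minikube's retry package beyond what the log shows:

// retrysketch.go: sketch of the pattern behind the "will retry after ..."
// log lines; delays, attempt count and probe are illustrative.
package main

import (
	"errors"
	"log"
	"math/rand"
	"os/exec"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, growing
// delay between failures, and returns the last error on exhaustion.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		log.Printf("will retry after %v: %v", d, err)
		time.Sleep(d)
	}
	return errors.New("retries exhausted: " + err.Error())
}

func main() {
	err := retry(5, 300*time.Millisecond, func() error {
		// Same probe the test runs over ssh: is /mount-9p a 9p mount yet?
		return exec.Command("minikube", "-p", "functional-494713",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("/mount-9p is mounted")
}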

TestFunctional/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image rm kicbase/echo-server:functional-494713 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.175:32701
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.26s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)
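
Note: ImageSaveToFile, ImageRemove and ImageLoadFromFile together exercise a tar round-trip: save an image from the cluster runtime to a host tarball, drop it, load it back, and confirm it reappears in image ls. A compact sketch of the same round-trip, with binary name and tar path as assumptions:

// roundtrip.go: sketch of the save/remove/load cycle the tests above
// perform (profile and image names are taken from the log).
package main

import (
	"log"
	"os/exec"
	"strings"
)

func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	const profile = "functional-494713"
	const image = "kicbase/echo-server:" + profile
	tar := "/tmp/echo-server-save.tar"

	run("-p", profile, "image", "save", image, tar) // cluster -> tarball
	run("-p", profile, "image", "rm", image)        // drop it from the runtime
	run("-p", profile, "image", "load", tar)        // tarball -> cluster
	if !strings.Contains(run("-p", profile, "image", "ls"), "echo-server") {
		log.Fatal("image did not reappear after load")
	}
	log.Println("save/load round-trip OK")
}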

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-494713
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 image save --daemon kicbase/echo-server:functional-494713 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-494713
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.71s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-494713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2779324215/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-494713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2779324215/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-494713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2779324215/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-494713 ssh "findmnt -T" /mount1: exit status 1 (208.882216ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1025 09:06:48.783986  107766 retry.go:31] will retry after 539.975743ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-494713 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-494713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2779324215/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-494713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2779324215/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-494713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2779324215/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-494713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-494713
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-494713
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-494713
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (202.99s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1025 09:08:53.208143  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:09:20.908752  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-128181 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m22.384309831s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (202.99s)

TestMultiControlPlane/serial/DeployApp (9.79s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-128181 kubectl -- rollout status deployment/busybox: (7.406943486s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-8tnmb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-drjsc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-rfjw5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-8tnmb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-drjsc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-rfjw5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-8tnmb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-drjsc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-rfjw5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.79s)

TestMultiControlPlane/serial/PingHostFromPods (1.33s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-8tnmb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-8tnmb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-drjsc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-drjsc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-rfjw5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 kubectl -- exec busybox-7b57f96db7-rfjw5 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.33s)

TestMultiControlPlane/serial/AddWorkerNode (43.35s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 node add --alsologtostderr -v 5
E1025 09:11:37.049218  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:11:37.055627  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:11:37.067053  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:11:37.088570  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:11:37.130047  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:11:37.211611  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:11:37.373170  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:11:37.694878  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:11:38.336386  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:11:39.618651  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:11:42.180759  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-128181 node add --alsologtostderr -v 5: (42.634985427s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (43.35s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-128181 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.71s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.71s)

TestMultiControlPlane/serial/CopyFile (10.84s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp testdata/cp-test.txt ha-128181:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1477139381/001/cp-test_ha-128181.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181:/home/docker/cp-test.txt ha-128181-m02:/home/docker/cp-test_ha-128181_ha-128181-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m02 "sudo cat /home/docker/cp-test_ha-128181_ha-128181-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181:/home/docker/cp-test.txt ha-128181-m03:/home/docker/cp-test_ha-128181_ha-128181-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m03 "sudo cat /home/docker/cp-test_ha-128181_ha-128181-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181:/home/docker/cp-test.txt ha-128181-m04:/home/docker/cp-test_ha-128181_ha-128181-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181 "sudo cat /home/docker/cp-test.txt"
E1025 09:11:47.303119  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m04 "sudo cat /home/docker/cp-test_ha-128181_ha-128181-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp testdata/cp-test.txt ha-128181-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1477139381/001/cp-test_ha-128181-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181-m02:/home/docker/cp-test.txt ha-128181:/home/docker/cp-test_ha-128181-m02_ha-128181.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181 "sudo cat /home/docker/cp-test_ha-128181-m02_ha-128181.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181-m02:/home/docker/cp-test.txt ha-128181-m03:/home/docker/cp-test_ha-128181-m02_ha-128181-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m03 "sudo cat /home/docker/cp-test_ha-128181-m02_ha-128181-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181-m02:/home/docker/cp-test.txt ha-128181-m04:/home/docker/cp-test_ha-128181-m02_ha-128181-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m04 "sudo cat /home/docker/cp-test_ha-128181-m02_ha-128181-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp testdata/cp-test.txt ha-128181-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1477139381/001/cp-test_ha-128181-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181-m03:/home/docker/cp-test.txt ha-128181:/home/docker/cp-test_ha-128181-m03_ha-128181.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181 "sudo cat /home/docker/cp-test_ha-128181-m03_ha-128181.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181-m03:/home/docker/cp-test.txt ha-128181-m02:/home/docker/cp-test_ha-128181-m03_ha-128181-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m02 "sudo cat /home/docker/cp-test_ha-128181-m03_ha-128181-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181-m03:/home/docker/cp-test.txt ha-128181-m04:/home/docker/cp-test_ha-128181-m03_ha-128181-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m04 "sudo cat /home/docker/cp-test_ha-128181-m03_ha-128181-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp testdata/cp-test.txt ha-128181-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1477139381/001/cp-test_ha-128181-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181-m04:/home/docker/cp-test.txt ha-128181:/home/docker/cp-test_ha-128181-m04_ha-128181.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181 "sudo cat /home/docker/cp-test_ha-128181-m04_ha-128181.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181-m04:/home/docker/cp-test.txt ha-128181-m02:/home/docker/cp-test_ha-128181-m04_ha-128181-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m02 "sudo cat /home/docker/cp-test_ha-128181-m04_ha-128181-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 cp ha-128181-m04:/home/docker/cp-test.txt ha-128181-m03:/home/docker/cp-test_ha-128181-m04_ha-128181-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 ssh -n ha-128181-m03 "sudo cat /home/docker/cp-test_ha-128181-m04_ha-128181-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.84s)
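
Note: the matrix above copies testdata/cp-test.txt onto every node and cats it back over ssh, including cross-node copies, so every node pair is verified. A sketch of the per-node slice of that loop; the expected file contents are a hypothetical placeholder, since testdata/cp-test.txt is not shown in the log:

// cpverify.go: sketch of the cp/ssh check the CopyFile test runs per
// node (node names and paths are taken from the log above).
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	nodes := []string{"ha-128181", "ha-128181-m02", "ha-128181-m03", "ha-128181-m04"}
	const want = "cp-test contents" // placeholder for testdata/cp-test.txt

	for _, node := range nodes {
		// minikube -p ha-128181 cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
		dst := node + ":/home/docker/cp-test.txt"
		if err := exec.Command("minikube", "-p", "ha-128181",
			"cp", "testdata/cp-test.txt", dst).Run(); err != nil {
			log.Fatalf("cp to %s: %v", node, err)
		}
		// Read it back on the same node and compare.
		out, err := exec.Command("minikube", "-p", "ha-128181", "ssh", "-n", node,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatalf("ssh %s: %v", node, err)
		}
		if strings.TrimSpace(string(out)) != want {
			log.Fatalf("%s: unexpected contents %q", node, out)
		}
	}
	log.Println("cp/ssh verification passed on all nodes")
}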

TestMultiControlPlane/serial/StopSecondaryNode (87.33s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 node stop m02 --alsologtostderr -v 5
E1025 09:11:57.545163  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:12:18.027010  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:12:58.989468  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-128181 node stop m02 --alsologtostderr -v 5: (1m26.794357882s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-128181 status --alsologtostderr -v 5: exit status 7 (533.321867ms)
-- stdout --
	ha-128181
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-128181-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-128181-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-128181-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1025 09:13:22.088733  117564 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:13:22.088905  117564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:13:22.088918  117564 out.go:374] Setting ErrFile to fd 2...
	I1025 09:13:22.088924  117564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:13:22.089177  117564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
	I1025 09:13:22.089358  117564 out.go:368] Setting JSON to false
	I1025 09:13:22.089388  117564 mustload.go:65] Loading cluster: ha-128181
	I1025 09:13:22.089524  117564 notify.go:220] Checking for updates...
	I1025 09:13:22.089831  117564 config.go:182] Loaded profile config "ha-128181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:13:22.089852  117564 status.go:174] checking status of ha-128181 ...
	I1025 09:13:22.091839  117564 status.go:371] ha-128181 host status = "Running" (err=<nil>)
	I1025 09:13:22.091858  117564 host.go:66] Checking if "ha-128181" exists ...
	I1025 09:13:22.095459  117564 main.go:141] libmachine: domain ha-128181 has defined MAC address 52:54:00:da:5a:6e in network mk-ha-128181
	I1025 09:13:22.096150  117564 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:5a:6e", ip: ""} in network mk-ha-128181: {Iface:virbr1 ExpiryTime:2025-10-25 10:07:41 +0000 UTC Type:0 Mac:52:54:00:da:5a:6e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-128181 Clientid:01:52:54:00:da:5a:6e}
	I1025 09:13:22.096197  117564 main.go:141] libmachine: domain ha-128181 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:5a:6e in network mk-ha-128181
	I1025 09:13:22.096396  117564 host.go:66] Checking if "ha-128181" exists ...
	I1025 09:13:22.096722  117564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:13:22.099846  117564 main.go:141] libmachine: domain ha-128181 has defined MAC address 52:54:00:da:5a:6e in network mk-ha-128181
	I1025 09:13:22.100530  117564 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:5a:6e", ip: ""} in network mk-ha-128181: {Iface:virbr1 ExpiryTime:2025-10-25 10:07:41 +0000 UTC Type:0 Mac:52:54:00:da:5a:6e Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-128181 Clientid:01:52:54:00:da:5a:6e}
	I1025 09:13:22.100566  117564 main.go:141] libmachine: domain ha-128181 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:5a:6e in network mk-ha-128181
	I1025 09:13:22.100905  117564 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/ha-128181/id_rsa Username:docker}
	I1025 09:13:22.196906  117564 ssh_runner.go:195] Run: systemctl --version
	I1025 09:13:22.204966  117564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:13:22.227165  117564 kubeconfig.go:125] found "ha-128181" server: "https://192.168.39.254:8443"
	I1025 09:13:22.227212  117564 api_server.go:166] Checking apiserver status ...
	I1025 09:13:22.227258  117564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:13:22.250629  117564 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1386/cgroup
	W1025 09:13:22.262396  117564 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1386/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:13:22.262461  117564 ssh_runner.go:195] Run: ls
	I1025 09:13:22.267376  117564 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1025 09:13:22.272237  117564 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1025 09:13:22.272272  117564 status.go:463] ha-128181 apiserver status = Running (err=<nil>)
	I1025 09:13:22.272287  117564 status.go:176] ha-128181 status: &{Name:ha-128181 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:13:22.272310  117564 status.go:174] checking status of ha-128181-m02 ...
	I1025 09:13:22.274271  117564 status.go:371] ha-128181-m02 host status = "Stopped" (err=<nil>)
	I1025 09:13:22.274300  117564 status.go:384] host is not running, skipping remaining checks
	I1025 09:13:22.274309  117564 status.go:176] ha-128181-m02 status: &{Name:ha-128181-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:13:22.274331  117564 status.go:174] checking status of ha-128181-m03 ...
	I1025 09:13:22.275820  117564 status.go:371] ha-128181-m03 host status = "Running" (err=<nil>)
	I1025 09:13:22.275841  117564 host.go:66] Checking if "ha-128181-m03" exists ...
	I1025 09:13:22.278527  117564 main.go:141] libmachine: domain ha-128181-m03 has defined MAC address 52:54:00:d9:9e:8b in network mk-ha-128181
	I1025 09:13:22.278975  117564 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:9e:8b", ip: ""} in network mk-ha-128181: {Iface:virbr1 ExpiryTime:2025-10-25 10:09:48 +0000 UTC Type:0 Mac:52:54:00:d9:9e:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-128181-m03 Clientid:01:52:54:00:d9:9e:8b}
	I1025 09:13:22.279002  117564 main.go:141] libmachine: domain ha-128181-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:d9:9e:8b in network mk-ha-128181
	I1025 09:13:22.279150  117564 host.go:66] Checking if "ha-128181-m03" exists ...
	I1025 09:13:22.279386  117564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:13:22.281701  117564 main.go:141] libmachine: domain ha-128181-m03 has defined MAC address 52:54:00:d9:9e:8b in network mk-ha-128181
	I1025 09:13:22.282115  117564 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:9e:8b", ip: ""} in network mk-ha-128181: {Iface:virbr1 ExpiryTime:2025-10-25 10:09:48 +0000 UTC Type:0 Mac:52:54:00:d9:9e:8b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-128181-m03 Clientid:01:52:54:00:d9:9e:8b}
	I1025 09:13:22.282139  117564 main.go:141] libmachine: domain ha-128181-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:d9:9e:8b in network mk-ha-128181
	I1025 09:13:22.282316  117564 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/ha-128181-m03/id_rsa Username:docker}
	I1025 09:13:22.375219  117564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:13:22.395816  117564 kubeconfig.go:125] found "ha-128181" server: "https://192.168.39.254:8443"
	I1025 09:13:22.395848  117564 api_server.go:166] Checking apiserver status ...
	I1025 09:13:22.395895  117564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:13:22.419527  117564 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1817/cgroup
	W1025 09:13:22.434402  117564 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1817/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:13:22.434476  117564 ssh_runner.go:195] Run: ls
	I1025 09:13:22.440403  117564 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1025 09:13:22.446611  117564 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1025 09:13:22.446642  117564 status.go:463] ha-128181-m03 apiserver status = Running (err=<nil>)
	I1025 09:13:22.446656  117564 status.go:176] ha-128181-m03 status: &{Name:ha-128181-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:13:22.446685  117564 status.go:174] checking status of ha-128181-m04 ...
	I1025 09:13:22.448230  117564 status.go:371] ha-128181-m04 host status = "Running" (err=<nil>)
	I1025 09:13:22.448247  117564 host.go:66] Checking if "ha-128181-m04" exists ...
	I1025 09:13:22.450653  117564 main.go:141] libmachine: domain ha-128181-m04 has defined MAC address 52:54:00:a1:d1:68 in network mk-ha-128181
	I1025 09:13:22.451079  117564 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:d1:68", ip: ""} in network mk-ha-128181: {Iface:virbr1 ExpiryTime:2025-10-25 10:11:16 +0000 UTC Type:0 Mac:52:54:00:a1:d1:68 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-128181-m04 Clientid:01:52:54:00:a1:d1:68}
	I1025 09:13:22.451106  117564 main.go:141] libmachine: domain ha-128181-m04 has defined IP address 192.168.39.56 and MAC address 52:54:00:a1:d1:68 in network mk-ha-128181
	I1025 09:13:22.451239  117564 host.go:66] Checking if "ha-128181-m04" exists ...
	I1025 09:13:22.451409  117564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:13:22.453968  117564 main.go:141] libmachine: domain ha-128181-m04 has defined MAC address 52:54:00:a1:d1:68 in network mk-ha-128181
	I1025 09:13:22.454424  117564 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a1:d1:68", ip: ""} in network mk-ha-128181: {Iface:virbr1 ExpiryTime:2025-10-25 10:11:16 +0000 UTC Type:0 Mac:52:54:00:a1:d1:68 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-128181-m04 Clientid:01:52:54:00:a1:d1:68}
	I1025 09:13:22.454456  117564 main.go:141] libmachine: domain ha-128181-m04 has defined IP address 192.168.39.56 and MAC address 52:54:00:a1:d1:68 in network mk-ha-128181
	I1025 09:13:22.454635  117564 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/ha-128181-m04/id_rsa Username:docker}
	I1025 09:13:22.540023  117564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:13:22.559654  117564 status.go:176] ha-128181-m04 status: &{Name:ha-128181-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (87.33s)
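
Note: the `exit status 7` from `minikube status` above is expected, not a failure: it is consistent with status encoding component state as a bitmask (host=1, kubelet=2, apiserver=4, so 7 means all three down on at least one node), which matches the all-Stopped m02 block. Treating that bit assignment as an assumption inferred from this log rather than documented behavior, a sketch of decoding it:

// statuscode.go: sketch that runs `minikube status` and decodes a
// non-zero exit code as a bitmask. The bit assignment is an assumption
// inferred from this report, not taken from minikube documentation.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "-p", "ha-128181", "status").Run()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	} else if err != nil {
		panic(err) // binary missing, etc.
	}
	for _, f := range []struct {
		bit  int
		name string
	}{{1, "host stopped"}, {2, "kubelet stopped"}, {4, "apiserver stopped"}} {
		if code&f.bit != 0 {
			fmt.Println(f.name)
		}
	}
	if code == 0 {
		fmt.Println("all components running")
	}
}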

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

TestMultiControlPlane/serial/RestartSecondaryNode (36.56s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 node start m02 --alsologtostderr -v 5
E1025 09:13:53.202728  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-128181 node start m02 --alsologtostderr -v 5: (35.671330496s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.56s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.25s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 stop --alsologtostderr -v 5
E1025 09:14:20.911427  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:16:37.048949  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:17:04.753519  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-128181 stop --alsologtostderr -v 5: (4m16.551344916s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 start --wait true --alsologtostderr -v 5
E1025 09:18:53.205514  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:20:16.271145  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-128181 start --wait true --alsologtostderr -v 5: (2m2.549810998s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.25s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.58s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-128181 node delete m03 --alsologtostderr -v 5: (17.941120383s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.58s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

TestMultiControlPlane/serial/StopCluster (249.78s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 stop --alsologtostderr -v 5
E1025 09:21:37.048925  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:23:53.206656  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-128181 stop --alsologtostderr -v 5: (4m9.709764194s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-128181 status --alsologtostderr -v 5: exit status 7 (67.021288ms)

-- stdout --
	ha-128181
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-128181-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-128181-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 09:24:48.620671  121216 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:24:48.620949  121216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:24:48.620958  121216 out.go:374] Setting ErrFile to fd 2...
	I1025 09:24:48.620963  121216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:24:48.621167  121216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
	I1025 09:24:48.621353  121216 out.go:368] Setting JSON to false
	I1025 09:24:48.621391  121216 mustload.go:65] Loading cluster: ha-128181
	I1025 09:24:48.621521  121216 notify.go:220] Checking for updates...
	I1025 09:24:48.621760  121216 config.go:182] Loaded profile config "ha-128181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:24:48.621790  121216 status.go:174] checking status of ha-128181 ...
	I1025 09:24:48.623797  121216 status.go:371] ha-128181 host status = "Stopped" (err=<nil>)
	I1025 09:24:48.623812  121216 status.go:384] host is not running, skipping remaining checks
	I1025 09:24:48.623817  121216 status.go:176] ha-128181 status: &{Name:ha-128181 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:24:48.623833  121216 status.go:174] checking status of ha-128181-m02 ...
	I1025 09:24:48.625165  121216 status.go:371] ha-128181-m02 host status = "Stopped" (err=<nil>)
	I1025 09:24:48.625179  121216 status.go:384] host is not running, skipping remaining checks
	I1025 09:24:48.625184  121216 status.go:176] ha-128181-m02 status: &{Name:ha-128181-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:24:48.625196  121216 status.go:174] checking status of ha-128181-m04 ...
	I1025 09:24:48.626420  121216 status.go:371] ha-128181-m04 host status = "Stopped" (err=<nil>)
	I1025 09:24:48.626433  121216 status.go:384] host is not running, skipping remaining checks
	I1025 09:24:48.626436  121216 status.go:176] ha-128181-m04 status: &{Name:ha-128181-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
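Worth noting: minikube status reports cluster state through its exit code as well as its stdout table, which is why the assertion above expects exit status 7 from a fully stopped cluster rather than 0. A hedged Go sketch of consuming that code (the mapping of specific codes to states is inferred from this run, not from a documented contract verified here):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-128181", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // the per-node host/kubelet/apiserver table
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero here (7 in the run above) means "not fully running",
		// not that the status command itself broke.
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}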
--- PASS: TestMultiControlPlane/serial/StopCluster (249.78s)

TestMultiControlPlane/serial/RestartCluster (91.07s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-128181 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m30.425449475s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (91.07s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

TestMultiControlPlane/serial/AddSecondaryNode (87.63s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 node add --control-plane --alsologtostderr -v 5
E1025 09:26:37.049202  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-128181 node add --control-plane --alsologtostderr -v 5: (1m26.935682395s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-128181 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (87.63s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.71s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.71s)

TestJSONOutput/start/Command (56.8s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-810599 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1025 09:28:00.117637  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-810599 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (56.795752083s)
--- PASS: TestJSONOutput/start/Command (56.80s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-810599 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-810599 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.87s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-810599 --output=json --user=testUser
E1025 09:28:53.204749  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-810599 --output=json --user=testUser: (6.873236554s)
--- PASS: TestJSONOutput/stop/Command (6.87s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-677546 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-677546 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.263975ms)

-- stdout --
	{"specversion":"1.0","id":"516ba3ed-ff9d-4f24-a97f-c3b55adf6e6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-677546] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f88c07d-ab60-4ea3-a3b6-879a112b2de7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21794"}}
	{"specversion":"1.0","id":"aa01341e-b76e-49d4-83c9-d8cb666e77ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6dc14edd-5615-4af2-9ab8-e1b4fabfd7e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21794-103842/kubeconfig"}}
	{"specversion":"1.0","id":"c7b22039-ab3f-4a75-beb1-790411ae1a07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-103842/.minikube"}}
	{"specversion":"1.0","id":"f5ef8c89-9a5f-4632-a94a-b646b8f911b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3ba2f174-e27b-4126-a3d8-57165bf7ef8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8cf77036-5087-47a2-8612-03aa29852300","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-677546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-677546
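With --output=json, every line minikube prints is a self-contained CloudEvents-style JSON object, as in the stdout block above. A small Go decoding sketch that models only the keys visible in this log (anything else is ignored; the sample line is abbreviated illustrative data, not a verbatim event):

package main

import (
	"encoding/json"
	"fmt"
)

// Field set mirrors the events shown above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"example","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
	// io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS 56
}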
--- PASS: TestErrorJSONOutput (0.23s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (84.78s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-087703 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-087703 --driver=kvm2  --container-runtime=crio: (39.754365372s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-089743 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-089743 --driver=kvm2  --container-runtime=crio: (42.350386563s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-087703
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-089743
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-089743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-089743
helpers_test.go:175: Cleaning up "first-087703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-087703
--- PASS: TestMinikubeProfile (84.78s)

TestMountStart/serial/StartWithMountFirst (21.3s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-936767 --memory=3072 --mount-string /tmp/TestMountStartserial657967207/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-936767 --memory=3072 --mount-string /tmp/TestMountStartserial657967207/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.297251307s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.30s)

TestMountStart/serial/VerifyMountFirst (0.31s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-936767 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-936767 ssh -- findmnt --json /minikube-host
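Because findmnt --json returns the mount table as JSON, the mount assertion above is easy to script. A Go sketch that runs the same command and reports the /minikube-host entry; the struct covers only the fields this check needs from findmnt's standard JSON layout:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Minimal slice of the findmnt --json output shape.
type findmntOut struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-936767",
		"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		panic(err) // non-zero exit means the target is not mounted
	}
	var fm findmntOut
	if err := json.Unmarshal(out, &fm); err != nil {
		panic(err)
	}
	for _, fs := range fm.Filesystems {
		fmt.Printf("%s mounted from %s (%s)\n", fs.Target, fs.Source, fs.FSType)
	}
}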
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

TestMountStart/serial/StartWithMountSecond (20.95s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-956239 --memory=3072 --mount-string /tmp/TestMountStartserial657967207/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-956239 --memory=3072 --mount-string /tmp/TestMountStartserial657967207/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.952424693s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.95s)

TestMountStart/serial/VerifyMountSecond (0.31s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-956239 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-956239 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (0.69s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-936767 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-956239 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-956239 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.37s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-956239
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-956239: (1.369300319s)
--- PASS: TestMountStart/serial/Stop (1.37s)

TestMountStart/serial/RestartStopped (19.24s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-956239
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-956239: (18.244608603s)
--- PASS: TestMountStart/serial/RestartStopped (19.24s)

TestMountStart/serial/VerifyMountPostStop (0.3s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-956239 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-956239 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (98.55s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-530815 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1025 09:31:37.048850  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-530815 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m38.204576893s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (98.55s)

TestMultiNode/serial/DeployApp2Nodes (5.49s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-530815 -- rollout status deployment/busybox: (3.88946357s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- exec busybox-7b57f96db7-mxvhx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- exec busybox-7b57f96db7-v7d5d -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- exec busybox-7b57f96db7-mxvhx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- exec busybox-7b57f96db7-v7d5d -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- exec busybox-7b57f96db7-mxvhx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- exec busybox-7b57f96db7-v7d5d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.49s)

TestMultiNode/serial/PingHostFrom2Pods (0.88s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- exec busybox-7b57f96db7-mxvhx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- exec busybox-7b57f96db7-mxvhx -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- exec busybox-7b57f96db7-v7d5d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-530815 -- exec busybox-7b57f96db7-v7d5d -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

TestMultiNode/serial/AddNode (46.36s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-530815 -v=5 --alsologtostderr
E1025 09:33:53.202859  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-530815 -v=5 --alsologtostderr: (45.898723153s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.36s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-530815 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.47s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

TestMultiNode/serial/CopyFile (6.09s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 cp testdata/cp-test.txt multinode-530815:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 cp multinode-530815:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1467315343/001/cp-test_multinode-530815.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 cp multinode-530815:/home/docker/cp-test.txt multinode-530815-m02:/home/docker/cp-test_multinode-530815_multinode-530815-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815-m02 "sudo cat /home/docker/cp-test_multinode-530815_multinode-530815-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 cp multinode-530815:/home/docker/cp-test.txt multinode-530815-m03:/home/docker/cp-test_multinode-530815_multinode-530815-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815-m03 "sudo cat /home/docker/cp-test_multinode-530815_multinode-530815-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 cp testdata/cp-test.txt multinode-530815-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 cp multinode-530815-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1467315343/001/cp-test_multinode-530815-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 cp multinode-530815-m02:/home/docker/cp-test.txt multinode-530815:/home/docker/cp-test_multinode-530815-m02_multinode-530815.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815 "sudo cat /home/docker/cp-test_multinode-530815-m02_multinode-530815.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 cp multinode-530815-m02:/home/docker/cp-test.txt multinode-530815-m03:/home/docker/cp-test_multinode-530815-m02_multinode-530815-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815-m03 "sudo cat /home/docker/cp-test_multinode-530815-m02_multinode-530815-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 cp testdata/cp-test.txt multinode-530815-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 cp multinode-530815-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1467315343/001/cp-test_multinode-530815-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 cp multinode-530815-m03:/home/docker/cp-test.txt multinode-530815:/home/docker/cp-test_multinode-530815-m03_multinode-530815.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815 "sudo cat /home/docker/cp-test_multinode-530815-m03_multinode-530815.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 cp multinode-530815-m03:/home/docker/cp-test.txt multinode-530815-m02:/home/docker/cp-test_multinode-530815-m03_multinode-530815-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 ssh -n multinode-530815-m02 "sudo cat /home/docker/cp-test_multinode-530815-m03_multinode-530815-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.09s)

TestMultiNode/serial/StopNode (2.32s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-530815 node stop m03: (1.629354819s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-530815 status: exit status 7 (342.715054ms)

-- stdout --
	multinode-530815
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-530815-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-530815-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-530815 status --alsologtostderr: exit status 7 (351.420015ms)

-- stdout --
	multinode-530815
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-530815-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-530815-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 09:34:07.971216  126844 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:34:07.971566  126844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:34:07.971574  126844 out.go:374] Setting ErrFile to fd 2...
	I1025 09:34:07.971578  126844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:34:07.971795  126844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
	I1025 09:34:07.971972  126844 out.go:368] Setting JSON to false
	I1025 09:34:07.972006  126844 mustload.go:65] Loading cluster: multinode-530815
	I1025 09:34:07.972130  126844 notify.go:220] Checking for updates...
	I1025 09:34:07.972426  126844 config.go:182] Loaded profile config "multinode-530815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:34:07.972443  126844 status.go:174] checking status of multinode-530815 ...
	I1025 09:34:07.974827  126844 status.go:371] multinode-530815 host status = "Running" (err=<nil>)
	I1025 09:34:07.974866  126844 host.go:66] Checking if "multinode-530815" exists ...
	I1025 09:34:07.978029  126844 main.go:141] libmachine: domain multinode-530815 has defined MAC address 52:54:00:fc:1e:2e in network mk-multinode-530815
	I1025 09:34:07.978482  126844 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fc:1e:2e", ip: ""} in network mk-multinode-530815: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:43 +0000 UTC Type:0 Mac:52:54:00:fc:1e:2e Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-530815 Clientid:01:52:54:00:fc:1e:2e}
	I1025 09:34:07.978507  126844 main.go:141] libmachine: domain multinode-530815 has defined IP address 192.168.39.32 and MAC address 52:54:00:fc:1e:2e in network mk-multinode-530815
	I1025 09:34:07.978631  126844 host.go:66] Checking if "multinode-530815" exists ...
	I1025 09:34:07.978874  126844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:34:07.981051  126844 main.go:141] libmachine: domain multinode-530815 has defined MAC address 52:54:00:fc:1e:2e in network mk-multinode-530815
	I1025 09:34:07.981493  126844 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fc:1e:2e", ip: ""} in network mk-multinode-530815: {Iface:virbr1 ExpiryTime:2025-10-25 10:31:43 +0000 UTC Type:0 Mac:52:54:00:fc:1e:2e Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-530815 Clientid:01:52:54:00:fc:1e:2e}
	I1025 09:34:07.981522  126844 main.go:141] libmachine: domain multinode-530815 has defined IP address 192.168.39.32 and MAC address 52:54:00:fc:1e:2e in network mk-multinode-530815
	I1025 09:34:07.981739  126844 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/multinode-530815/id_rsa Username:docker}
	I1025 09:34:08.061594  126844 ssh_runner.go:195] Run: systemctl --version
	I1025 09:34:08.071483  126844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:34:08.094207  126844 kubeconfig.go:125] found "multinode-530815" server: "https://192.168.39.32:8443"
	I1025 09:34:08.094241  126844 api_server.go:166] Checking apiserver status ...
	I1025 09:34:08.094273  126844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:34:08.118298  126844 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1347/cgroup
	W1025 09:34:08.131387  126844 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1347/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:34:08.131458  126844 ssh_runner.go:195] Run: ls
	I1025 09:34:08.137205  126844 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I1025 09:34:08.147820  126844 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I1025 09:34:08.147855  126844 status.go:463] multinode-530815 apiserver status = Running (err=<nil>)
	I1025 09:34:08.147870  126844 status.go:176] multinode-530815 status: &{Name:multinode-530815 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:34:08.147892  126844 status.go:174] checking status of multinode-530815-m02 ...
	I1025 09:34:08.149796  126844 status.go:371] multinode-530815-m02 host status = "Running" (err=<nil>)
	I1025 09:34:08.149822  126844 host.go:66] Checking if "multinode-530815-m02" exists ...
	I1025 09:34:08.152706  126844 main.go:141] libmachine: domain multinode-530815-m02 has defined MAC address 52:54:00:94:69:2e in network mk-multinode-530815
	I1025 09:34:08.153239  126844 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:94:69:2e", ip: ""} in network mk-multinode-530815: {Iface:virbr1 ExpiryTime:2025-10-25 10:32:37 +0000 UTC Type:0 Mac:52:54:00:94:69:2e Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-530815-m02 Clientid:01:52:54:00:94:69:2e}
	I1025 09:34:08.153267  126844 main.go:141] libmachine: domain multinode-530815-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:94:69:2e in network mk-multinode-530815
	I1025 09:34:08.153430  126844 host.go:66] Checking if "multinode-530815-m02" exists ...
	I1025 09:34:08.153676  126844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:34:08.156406  126844 main.go:141] libmachine: domain multinode-530815-m02 has defined MAC address 52:54:00:94:69:2e in network mk-multinode-530815
	I1025 09:34:08.156850  126844 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:94:69:2e", ip: ""} in network mk-multinode-530815: {Iface:virbr1 ExpiryTime:2025-10-25 10:32:37 +0000 UTC Type:0 Mac:52:54:00:94:69:2e Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-530815-m02 Clientid:01:52:54:00:94:69:2e}
	I1025 09:34:08.156883  126844 main.go:141] libmachine: domain multinode-530815-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:94:69:2e in network mk-multinode-530815
	I1025 09:34:08.157030  126844 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21794-103842/.minikube/machines/multinode-530815-m02/id_rsa Username:docker}
	I1025 09:34:08.240630  126844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:34:08.257733  126844 status.go:176] multinode-530815-m02 status: &{Name:multinode-530815-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:34:08.257801  126844 status.go:174] checking status of multinode-530815-m03 ...
	I1025 09:34:08.259341  126844 status.go:371] multinode-530815-m03 host status = "Stopped" (err=<nil>)
	I1025 09:34:08.259360  126844 status.go:384] host is not running, skipping remaining checks
	I1025 09:34:08.259368  126844 status.go:176] multinode-530815-m03 status: &{Name:multinode-530815-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)

TestMultiNode/serial/StartAfterStop (39.98s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-530815 node start m03 -v=5 --alsologtostderr: (39.466865061s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.98s)

TestMultiNode/serial/RestartKeepsNodes (307.66s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-530815
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-530815
E1025 09:36:37.051063  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:56.275270  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-530815: (2m55.650933479s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-530815 --wait=true -v=5 --alsologtostderr
E1025 09:38:53.202634  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-530815 --wait=true -v=5 --alsologtostderr: (2m11.86705296s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-530815
--- PASS: TestMultiNode/serial/RestartKeepsNodes (307.66s)

TestMultiNode/serial/DeleteNode (2.56s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-530815 node delete m03: (2.089827266s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.56s)

TestMultiNode/serial/StopMultiNode (166.7s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 stop
E1025 09:41:37.051868  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-530815 stop: (2m46.561871543s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-530815 status: exit status 7 (67.907889ms)

-- stdout --
	multinode-530815
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-530815-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-530815 status --alsologtostderr: exit status 7 (68.120549ms)

-- stdout --
	multinode-530815
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-530815-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 09:42:45.152611  129186 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:42:45.152905  129186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:42:45.152918  129186 out.go:374] Setting ErrFile to fd 2...
	I1025 09:42:45.152925  129186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:42:45.153142  129186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
	I1025 09:42:45.153342  129186 out.go:368] Setting JSON to false
	I1025 09:42:45.153381  129186 mustload.go:65] Loading cluster: multinode-530815
	I1025 09:42:45.153476  129186 notify.go:220] Checking for updates...
	I1025 09:42:45.153794  129186 config.go:182] Loaded profile config "multinode-530815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:42:45.153812  129186 status.go:174] checking status of multinode-530815 ...
	I1025 09:42:45.156347  129186 status.go:371] multinode-530815 host status = "Stopped" (err=<nil>)
	I1025 09:42:45.156371  129186 status.go:384] host is not running, skipping remaining checks
	I1025 09:42:45.156380  129186 status.go:176] multinode-530815 status: &{Name:multinode-530815 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:42:45.156403  129186 status.go:174] checking status of multinode-530815-m02 ...
	I1025 09:42:45.157993  129186 status.go:371] multinode-530815-m02 host status = "Stopped" (err=<nil>)
	I1025 09:42:45.158019  129186 status.go:384] host is not running, skipping remaining checks
	I1025 09:42:45.158026  129186 status.go:176] multinode-530815-m02 status: &{Name:multinode-530815-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (166.70s)

TestMultiNode/serial/RestartMultiNode (87.04s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-530815 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1025 09:43:53.203147  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-530815 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m26.574237475s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-530815 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (87.04s)

TestMultiNode/serial/ValidateNameConflict (38.77s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-530815
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-530815-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-530815-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (83.473588ms)

-- stdout --
	* [multinode-530815-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-103842/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-103842/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-530815-m02' is duplicated with machine name 'multinode-530815-m02' in profile 'multinode-530815'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-530815-m03 --driver=kvm2  --container-runtime=crio
E1025 09:44:40.121896  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-530815-m03 --driver=kvm2  --container-runtime=crio: (37.558686961s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-530815
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-530815: exit status 80 (202.848033ms)

-- stdout --
	* Adding node m03 to cluster multinode-530815 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-530815-m03 already exists in multinode-530815-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-530815-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.77s)
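
Both expected failures above (MK_USAGE for the duplicated profile name, GUEST_NODE_ADD for the pre-existing node) reduce to a uniqueness check against names already in use. A minimal sketch of such a check; the function name and message wording are assumptions, not minikube's actual code:

package main

import "fmt"

// validateProfileName rejects a new profile whose name collides with an
// existing machine name, mirroring the MK_USAGE error logged above.
func validateProfileName(name string, existingMachines []string) error {
	for _, m := range existingMachines {
		if m == name {
			return fmt.Errorf("profile name %q is duplicated with machine name %q: profile names must be unique", name, m)
		}
	}
	return nil
}

func main() {
	machines := []string{"multinode-530815", "multinode-530815-m02"}
	fmt.Println(validateProfileName("multinode-530815-m02", machines)) // duplicate -> error
	fmt.Println(validateProfileName("multinode-530815-m04", machines)) // unique -> <nil>
}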

TestScheduledStopUnix (109.65s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-338204 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-338204 --memory=3072 --driver=kvm2  --container-runtime=crio: (37.98526683s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-338204 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-338204 -n scheduled-stop-338204
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-338204 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1025 09:47:48.214814  107766 retry.go:31] will retry after 111.766µs: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.215994  107766 retry.go:31] will retry after 112.589µs: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.217131  107766 retry.go:31] will retry after 224.365µs: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.218265  107766 retry.go:31] will retry after 217.456µs: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.219367  107766 retry.go:31] will retry after 731.848µs: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.220521  107766 retry.go:31] will retry after 790.742µs: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.221657  107766 retry.go:31] will retry after 768.445µs: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.222813  107766 retry.go:31] will retry after 1.97889ms: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.225011  107766 retry.go:31] will retry after 2.703657ms: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.228222  107766 retry.go:31] will retry after 2.757366ms: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.231500  107766 retry.go:31] will retry after 6.549415ms: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.238795  107766 retry.go:31] will retry after 7.393861ms: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.247052  107766 retry.go:31] will retry after 15.804646ms: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.263326  107766 retry.go:31] will retry after 13.879279ms: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.277650  107766 retry.go:31] will retry after 35.548529ms: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
I1025 09:47:48.313964  107766 retry.go:31] will retry after 43.580899ms: open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/scheduled-stop-338204/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-338204 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-338204 -n scheduled-stop-338204
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-338204
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-338204 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1025 09:48:53.207941  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-338204
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-338204: exit status 7 (64.229169ms)

-- stdout --
	scheduled-stop-338204
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-338204 -n scheduled-stop-338204
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-338204 -n scheduled-stop-338204: exit status 7 (63.157102ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-338204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-338204
--- PASS: TestScheduledStopUnix (109.65s)
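
The retry.go:31 lines above poll the scheduled-stop pid file with waits that roughly double each attempt, plus jitter (112µs, 224µs, ... 43ms). A minimal sketch of that jittered exponential-backoff pattern; the names and constants here are assumptions, not minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os"
	"time"
)

// retryWithBackoff runs op until it succeeds or attempts are exhausted,
// doubling the base delay each round and adding up to 100% random jitter.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	err := retryWithBackoff(5, 100*time.Microsecond, func() error {
		_, e := os.Open("/nonexistent/profiles/pid")
		return e
	})
	if errors.Is(err, os.ErrNotExist) {
		fmt.Println("gave up:", err)
	}
}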

TestRunningBinaryUpgrade (150.3s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1329185196 start -p running-upgrade-083990 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1329185196 start -p running-upgrade-083990 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m40.694180775s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-083990 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-083990 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.854292564s)
helpers_test.go:175: Cleaning up "running-upgrade-083990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-083990
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-083990: (1.491727965s)
--- PASS: TestRunningBinaryUpgrade (150.30s)

TestKubernetesUpgrade (202.15s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-258418 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-258418 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.122953406s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-258418
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-258418: (2.114251912s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-258418 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-258418 status --format={{.Host}}: exit status 7 (70.819915ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-258418 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-258418 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.448157577s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-258418 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-258418 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-258418 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (95.402437ms)

-- stdout --
	* [kubernetes-upgrade-258418] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-103842/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-103842/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-258418
	    minikube start -p kubernetes-upgrade-258418 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2584182 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-258418 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-258418 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-258418 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.217801086s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-258418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-258418
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-258418: (1.005232269s)
--- PASS: TestKubernetesUpgrade (202.15s)
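
The downgrade attempt above fails by design: minikube refuses to move an existing v1.34.1 cluster back to v1.28.0 and instead suggests deleting and recreating it. A sketch of such a guard using golang.org/x/mod/semver (fetch with go get golang.org/x/mod/semver); the function name and error text are assumptions, not minikube's implementation:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkNoDowngrade rejects a requested version older than the running one.
func checkNoDowngrade(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkNoDowngrade("v1.34.1", "v1.28.0")) // downgrade -> error
	fmt.Println(checkNoDowngrade("v1.28.0", "v1.34.1")) // upgrade -> <nil>
}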

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-930549 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-930549 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (97.255782ms)

-- stdout --
	* [NoKubernetes-930549] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-103842/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-103842/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (84.88s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-930549 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-930549 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m24.59522431s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-930549 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (84.88s)

TestStoppedBinaryUpgrade/Setup (2.98s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.98s)

TestStoppedBinaryUpgrade/Upgrade (106.93s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3709242563 start -p stopped-upgrade-685866 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3709242563 start -p stopped-upgrade-685866 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m2.532894971s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3709242563 -p stopped-upgrade-685866 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3709242563 -p stopped-upgrade-685866 stop: (1.84410619s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-685866 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-685866 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.550598665s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (106.93s)

TestNoKubernetes/serial/StartWithStopK8s (45.2s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-930549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-930549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.153775504s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-930549 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-930549 status -o json: exit status 2 (226.625051ms)

-- stdout --
	{"Name":"NoKubernetes-930549","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-930549
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (45.20s)

TestNoKubernetes/serial/Start (38.64s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-930549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-930549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (38.644299134s)
--- PASS: TestNoKubernetes/serial/Start (38.64s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-685866
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-685866: (1.071107307s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-930549 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-930549 "sudo systemctl is-active --quiet service kubelet": exit status 1 (177.697663ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

TestNoKubernetes/serial/ProfileList (0.74s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.74s)

TestNoKubernetes/serial/Stop (1.36s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-930549
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-930549: (1.359278984s)
--- PASS: TestNoKubernetes/serial/Stop (1.36s)

TestNoKubernetes/serial/StartNoArgs (41.97s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-930549 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-930549 --driver=kvm2  --container-runtime=crio: (41.969789934s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (41.97s)

TestNetworkPlugins/group/false (3.67s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-173840 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-173840 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (116.594295ms)

-- stdout --
	* [false-173840] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-103842/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-103842/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1025 09:51:50.845640  135180 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:51:50.845919  135180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:51:50.845929  135180 out.go:374] Setting ErrFile to fd 2...
	I1025 09:51:50.845934  135180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:51:50.846169  135180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-103842/.minikube/bin
	I1025 09:51:50.846676  135180 out.go:368] Setting JSON to false
	I1025 09:51:50.847671  135180 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5652,"bootTime":1761380259,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:51:50.847796  135180 start.go:141] virtualization: kvm guest
	I1025 09:51:50.849717  135180 out.go:179] * [false-173840] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:51:50.850921  135180 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:51:50.850988  135180 notify.go:220] Checking for updates...
	I1025 09:51:50.853570  135180 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:51:50.854894  135180 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-103842/kubeconfig
	I1025 09:51:50.856151  135180 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-103842/.minikube
	I1025 09:51:50.857475  135180 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:51:50.858799  135180 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:51:50.861241  135180 config.go:182] Loaded profile config "NoKubernetes-930549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1025 09:51:50.861397  135180 config.go:182] Loaded profile config "force-systemd-flag-341224": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:51:50.861506  135180 config.go:182] Loaded profile config "kubernetes-upgrade-258418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:51:50.861796  135180 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:51:50.895006  135180 out.go:179] * Using the kvm2 driver based on user configuration
	I1025 09:51:50.896157  135180 start.go:305] selected driver: kvm2
	I1025 09:51:50.896173  135180 start.go:925] validating driver "kvm2" against <nil>
	I1025 09:51:50.896186  135180 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:51:50.898297  135180 out.go:203] 
	W1025 09:51:50.899525  135180 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1025 09:51:50.900875  135180 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-173840 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-173840

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-173840

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-173840

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-173840

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-173840

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-173840

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-173840

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-173840

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-173840

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-173840

>>> host: /etc/nsswitch.conf:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: /etc/hosts:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: /etc/resolv.conf:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-173840

>>> host: crictl pods:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: crictl containers:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> k8s: describe netcat deployment:
error: context "false-173840" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-173840" does not exist

>>> k8s: netcat logs:
error: context "false-173840" does not exist

>>> k8s: describe coredns deployment:
error: context "false-173840" does not exist

>>> k8s: describe coredns pods:
error: context "false-173840" does not exist

>>> k8s: coredns logs:
error: context "false-173840" does not exist

>>> k8s: describe api server pod(s):
error: context "false-173840" does not exist

>>> k8s: api server logs:
error: context "false-173840" does not exist

>>> host: /etc/cni:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: ip a s:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: ip r s:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: iptables-save:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: iptables table nat:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> k8s: describe kube-proxy daemon set:
error: context "false-173840" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-173840" does not exist

>>> k8s: kube-proxy logs:
error: context "false-173840" does not exist

>>> host: kubelet daemon status:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: kubelet daemon config:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> k8s: kubelet logs:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21794-103842/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:51:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.82:8443
  name: kubernetes-upgrade-258418
contexts:
- context:
    cluster: kubernetes-upgrade-258418
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:51:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-258418
  name: kubernetes-upgrade-258418
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-258418
  user:
    client-certificate: /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/kubernetes-upgrade-258418/client.crt
    client-key: /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/kubernetes-upgrade-258418/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-173840

>>> host: docker daemon status:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: docker daemon config:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: /etc/docker/daemon.json:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: docker system info:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: cri-docker daemon status:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: cri-docker daemon config:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: cri-dockerd version:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: containerd daemon status:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: containerd daemon config:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: /etc/containerd/config.toml:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: containerd config dump:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: crio daemon status:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: crio daemon config:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: /etc/crio:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

>>> host: crio config:
* Profile "false-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-173840"

----------------------- debugLogs end: false-173840 [took: 3.372648283s] --------------------------------
helpers_test.go:175: Cleaning up "false-173840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-173840
--- PASS: TestNetworkPlugins/group/false (3.67s)

TestPause/serial/Start (83.59s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-946519 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-946519 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m23.592953748s)
--- PASS: TestPause/serial/Start (83.59s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-930549 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-930549 "sudo systemctl is-active --quiet service kubelet": exit status 1 (181.40609ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (102.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-994847 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1025 09:53:36.279412  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-994847 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m42.103376419s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (102.10s)

TestPause/serial/SecondStartNoReconfiguration (55.19s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-946519 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1025 09:53:53.203426  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-946519 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.163273829s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (55.19s)

TestStartStop/group/no-preload/serial/FirstStart (80.02s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-096917 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-096917 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m20.019489981s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (80.02s)

TestPause/serial/Pause (0.81s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-946519 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.81s)

TestPause/serial/VerifyStatus (0.24s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-946519 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-946519 --output=json --layout=cluster: exit status 2 (234.883048ms)

-- stdout --
	{"Name":"pause-946519","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-946519","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

TestPause/serial/Unpause (0.74s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-946519 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

TestPause/serial/PauseAgain (0.9s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-946519 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.90s)

TestPause/serial/DeletePaused (1.11s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-946519 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-946519 --alsologtostderr -v=5: (1.105670934s)
--- PASS: TestPause/serial/DeletePaused (1.11s)

TestPause/serial/VerifyDeletedResources (0.65s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.65s)

TestStartStop/group/embed-certs/serial/FirstStart (51.49s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-711994 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-711994 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (51.489106012s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.49s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-994847 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4a451d9b-34a8-4fe3-9300-07ad89428e12] Pending
helpers_test.go:352: "busybox" [4a451d9b-34a8-4fe3-9300-07ad89428e12] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4a451d9b-34a8-4fe3-9300-07ad89428e12] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004680593s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-994847 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-994847 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-994847 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.229627325s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-994847 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.32s)

TestStartStop/group/old-k8s-version/serial/Stop (86.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-994847 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-994847 --alsologtostderr -v=3: (1m26.155227805s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (86.16s)

TestStartStop/group/no-preload/serial/DeployApp (11.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-096917 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [639ec405-3f68-4982-8728-104fff75af68] Pending
helpers_test.go:352: "busybox" [639ec405-3f68-4982-8728-104fff75af68] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [639ec405-3f68-4982-8728-104fff75af68] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004235337s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-096917 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.31s)

TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-711994 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f4c0ad4f-c00e-4132-9b3a-0b0358b26904] Pending
helpers_test.go:352: "busybox" [f4c0ad4f-c00e-4132-9b3a-0b0358b26904] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f4c0ad4f-c00e-4132-9b3a-0b0358b26904] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003696298s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-711994 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-096917 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-096917 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/no-preload/serial/Stop (74.66s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-096917 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-096917 --alsologtostderr -v=3: (1m14.656233337s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (74.66s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-711994 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-711994 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/embed-certs/serial/Stop (77.81s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-711994 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-711994 --alsologtostderr -v=3: (1m17.812749127s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (77.81s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-994847 -n old-k8s-version-994847
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-994847 -n old-k8s-version-994847: exit status 7 (68.147005ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-994847 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/old-k8s-version/serial/SecondStart (44.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-994847 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1025 09:56:37.049269  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-994847 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (44.079889745s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-994847 -n old-k8s-version-994847
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.42s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-096917 -n no-preload-096917
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-096917 -n no-preload-096917: exit status 7 (70.017011ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-096917 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/no-preload/serial/SecondStart (65.54s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-096917 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-096917 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m5.052523085s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-096917 -n no-preload-096917
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (65.54s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-711994 -n embed-certs-711994
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-711994 -n embed-certs-711994: exit status 7 (86.384941ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-711994 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (57.44s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-711994 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-711994 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (57.173376019s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-711994 -n embed-certs-711994
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (57.44s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-058562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-058562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.749006777s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.75s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gtwpm" [bc0afbe5-e43f-404a-8854-aba96ab9e122] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gtwpm" [bc0afbe5-e43f-404a-8854-aba96ab9e122] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.004980301s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gtwpm" [bc0afbe5-e43f-404a-8854-aba96ab9e122] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004307339s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-994847 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-994847 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-994847 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-994847 --alsologtostderr -v=1: (1.445423651s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-994847 -n old-k8s-version-994847
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-994847 -n old-k8s-version-994847: exit status 2 (263.362129ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-994847 -n old-k8s-version-994847
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-994847 -n old-k8s-version-994847: exit status 2 (252.96751ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-994847 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-994847 --alsologtostderr -v=1: (1.372847405s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-994847 -n old-k8s-version-994847
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-994847 -n old-k8s-version-994847
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.97s)

TestStartStop/group/newest-cni/serial/FirstStart (60.07s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-157170 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-157170 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m0.072989821s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.07s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-krrpg" [f05debc7-a492-43fc-a73b-aef2caba84e8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-krrpg" [f05debc7-a492-43fc-a73b-aef2caba84e8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.008245786s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vq74g" [bcb69442-a788-47c4-8e68-95016de34e0d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vq74g" [bcb69442-a788-47c4-8e68-95016de34e0d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.005888747s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-krrpg" [f05debc7-a492-43fc-a73b-aef2caba84e8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00977223s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-096917 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-096917 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (3.25s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-096917 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-096917 --alsologtostderr -v=1: (1.03175149s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-096917 -n no-preload-096917
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-096917 -n no-preload-096917: exit status 2 (274.622989ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-096917 -n no-preload-096917
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-096917 -n no-preload-096917: exit status 2 (274.269049ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-096917 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-096917 -n no-preload-096917
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-096917 -n no-preload-096917
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.25s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vq74g" [bcb69442-a788-47c4-8e68-95016de34e0d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004452782s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-711994 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

TestNetworkPlugins/group/auto/Start (59.36s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-173840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-173840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (59.363410517s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.36s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-711994 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (3.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-711994 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-711994 -n embed-certs-711994
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-711994 -n embed-certs-711994: exit status 2 (276.696092ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-711994 -n embed-certs-711994
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-711994 -n embed-certs-711994: exit status 2 (264.541544ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-711994 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-711994 -n embed-certs-711994
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-711994 -n embed-certs-711994
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.18s)

TestNetworkPlugins/group/kindnet/Start (74.61s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-173840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-173840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m14.61230188s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.61s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-058562 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1b475c93-4445-4c94-a6e4-8a1f626a0d59] Pending
helpers_test.go:352: "busybox" [1b475c93-4445-4c94-a6e4-8a1f626a0d59] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1b475c93-4445-4c94-a6e4-8a1f626a0d59] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00544437s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-058562 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-157170 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-157170 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.057211237s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/newest-cni/serial/Stop (7.96s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-157170 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-157170 --alsologtostderr -v=3: (7.95680709s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.96s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-058562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-058562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.156217262s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-058562 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (83.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-058562 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-058562 --alsologtostderr -v=3: (1m23.581374954s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (83.58s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.81s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-157170 -n newest-cni-157170
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-157170 -n newest-cni-157170: exit status 7 (75.960031ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-157170 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.81s)

TestStartStop/group/newest-cni/serial/SecondStart (47.98s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-157170 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 09:58:53.203158  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/addons-887867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-157170 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (47.744285247s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-157170 -n newest-cni-157170
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (47.98s)

TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-173840 "pgrep -a kubelet"
I1025 09:59:22.892538  107766 config.go:182] Loaded profile config "auto-173840": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

TestNetworkPlugins/group/auto/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-173840 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-29nzb" [94cbb3ba-8a32-45d7-b0e4-269cf9e61821] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-29nzb" [94cbb3ba-8a32-45d7-b0e4-269cf9e61821] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004125962s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.26s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-173840 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-173840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-173840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-157170 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/newest-cni/serial/Pause (2.5s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-157170 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-157170 -n newest-cni-157170
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-157170 -n newest-cni-157170: exit status 2 (214.398359ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-157170 -n newest-cni-157170
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-157170 -n newest-cni-157170: exit status 2 (216.290701ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-157170 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-157170 -n newest-cni-157170
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-157170 -n newest-cni-157170
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.50s)

TestNetworkPlugins/group/calico/Start (94.84s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-173840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-173840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m34.840868409s)
--- PASS: TestNetworkPlugins/group/calico/Start (94.84s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-79cxd" [20b7c97a-ce9c-486d-b14e-9fcad3bee2bc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004671972s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (91.66s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-173840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1025 09:59:50.668188  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/old-k8s-version-994847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:59:50.674810  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/old-k8s-version-994847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:59:50.686200  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/old-k8s-version-994847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:59:50.707688  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/old-k8s-version-994847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:59:50.749406  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/old-k8s-version-994847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:59:50.831749  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/old-k8s-version-994847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:59:50.993916  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/old-k8s-version-994847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:59:51.315527  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/old-k8s-version-994847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:59:51.957657  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/old-k8s-version-994847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-173840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m31.65857335s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (91.66s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-173840 "pgrep -a kubelet"
I1025 09:59:52.185662  107766 config.go:182] Loaded profile config "kindnet-173840": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-173840 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hbtz4" [ee11da40-e7d9-4850-9dde-54e508029356] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 09:59:53.239468  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/old-k8s-version-994847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:59:55.801058  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/old-k8s-version-994847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-hbtz4" [ee11da40-e7d9-4850-9dde-54e508029356] Running
E1025 10:00:00.922981  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/old-k8s-version-994847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004048925s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)
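
Editor's note: the NetCatPod step is identical for every plugin: force-replace the netcat deployment, then watch until a pod labelled app=netcat reports Ready. A rough equivalent using only kubectl (the 2m timeout is an assumption; the harness itself waits up to 15m):

	package main

	import (
		"log"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
	}

	func main() {
		// Deploy the probe workload, then block until it is Ready, roughly what
		// net_test.go:149 and net_test.go:163 do above via the test helpers.
		run("--context", "kindnet-173840", "replace", "--force", "-f", "testdata/netcat-deployment.yaml")
		run("--context", "kindnet-173840", "wait", "--for=condition=ready",
			"pod", "-l", "app=netcat", "--timeout=2m")
	}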

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-173840 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-173840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-173840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)
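
Editor's note: DNS, Localhost and HairPin are all exec probes against the same netcat deployment; only the dial target changes. HairPin is the interesting one: the pod dials its own Service name, which only succeeds when the CNI handles hairpin (loopback-through-VIP) traffic. A condensed sketch of the two nc probes, using the exact command from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probe execs into the netcat deployment and dials target:8080, the same
	// nc command used by the Localhost and HairPin checks above.
	func probe(target string) error {
		return exec.Command("kubectl", "--context", "kindnet-173840",
			"exec", "deployment/netcat", "--",
			"/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)).Run()
	}

	func main() {
		// "localhost" exercises the pod's own loopback; "netcat" resolves to
		// the Service VIP and comes back to the same pod: the hairpin path.
		for _, target := range []string{"localhost", "netcat"} {
			fmt.Println(target, "->", probe(target))
		}
	}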

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-058562 -n default-k8s-diff-port-058562
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-058562 -n default-k8s-diff-port-058562: exit status 7 (81.831318ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-058562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
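
Editor's note: the "(may be ok)" annotation above reflects how the harness treats minikube status on a stopped cluster: the command exits non-zero by design, so only unexpected failures are fatal. A sketch of that tolerance with the standard library:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "default-k8s-diff-port-058562").Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A stopped host yields a non-zero code (7 in the run above);
			// the printed state still tells us what we need.
			fmt.Printf("host=%q exit=%d (may be ok)\n", out, exitErr.ExitCode())
			return
		}
		if err != nil {
			panic(err) // the binary itself was not runnable
		}
		fmt.Printf("host=%q\n", out)
	}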

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (65.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-058562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-058562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m4.716941492s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-058562 -n default-k8s-diff-port-058562
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (65.06s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (87.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-173840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1025 10:00:26.478477  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:00:26.484935  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:00:26.496453  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:00:26.518022  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:00:26.559496  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:00:26.641470  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:00:26.803523  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:00:27.125192  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:00:27.767582  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:00:29.049489  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:00:31.611371  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:00:31.646575  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/old-k8s-version-994847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:00:36.733460  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:00:46.974900  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:01:07.457425  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:01:12.608903  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/old-k8s-version-994847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-173840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m27.961331497s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bc2nd" [075f09f0-11e0-4a81-a88b-04d4cd02cc7e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005082188s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-lgs7q" [07d153e2-9067-4a7c-8881-dce44a918c96] Running
E1025 10:01:20.123313  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/functional-494713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004399683s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-173840 "pgrep -a kubelet"
I1025 10:01:22.369201  107766 config.go:182] Loaded profile config "custom-flannel-173840": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-173840 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-n5s4q" [c5fb7b6f-3ad9-4784-b710-81be10d82b3c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-n5s4q" [c5fb7b6f-3ad9-4784-b710-81be10d82b3c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005733048s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bc2nd" [075f09f0-11e0-4a81-a88b-04d4cd02cc7e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005605133s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-058562 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-173840 "pgrep -a kubelet"
I1025 10:01:24.640716  107766 config.go:182] Loaded profile config "calico-173840": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-173840 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vlvkn" [a86ff1fb-a14e-48f1-920a-91f58dcf0dc5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vlvkn" [a86ff1fb-a14e-48f1-920a-91f58dcf0dc5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005260941s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-058562 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)
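
Editor's note: the two "Found non-minikube image" lines are informational: the check lists the runtime's images and reports anything outside the expected Kubernetes set. A rough sketch of that diff; the expected set below is illustrative, not the harness's real list, and it assumes `image list` prints one reference per line:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		expected := map[string]bool{ // illustrative subset only
			"registry.k8s.io/kube-apiserver":  true,
			"registry.k8s.io/coredns/coredns": true,
		}
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "default-k8s-diff-port-058562", "image", "list").Output()
		if err != nil {
			panic(err)
		}
		for _, ref := range strings.Fields(string(out)) {
			name := ref
			if i := strings.LastIndex(ref, ":"); i >= 0 {
				name = ref[:i] // drop the tag before comparing
			}
			if !expected[name] {
				fmt.Println("Found non-minikube image:", ref)
			}
		}
	}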

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-058562 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-058562 -n default-k8s-diff-port-058562
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-058562 -n default-k8s-diff-port-058562: exit status 2 (267.62203ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-058562 -n default-k8s-diff-port-058562
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-058562 -n default-k8s-diff-port-058562: exit status 2 (251.772344ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-058562 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-058562 -n default-k8s-diff-port-058562
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-058562 -n default-k8s-diff-port-058562
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.95s)
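
Editor's note: while paused, `status` deliberately exits 2 with APIServer=Paused and Kubelet=Stopped, which is why the harness labels those exits "(may be ok)". A compact sketch of the pause/verify/unpause round trip shown above:

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	// statusField reads one field of the status template, as with the
	// --format={{.APIServer}} and --format={{.Kubelet}} probes above. The
	// exit code is ignored on purpose: it is non-zero for the whole paused
	// window.
	func statusField(profile, field string) string {
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{."+field+"}}", "-p", profile).Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		const profile = "default-k8s-diff-port-058562"
		if err := exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run(); err != nil {
			log.Fatal(err)
		}
		log.Printf("paused: apiserver=%s kubelet=%s",
			statusField(profile, "APIServer"), statusField(profile, "Kubelet"))
		if err := exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run(); err != nil {
			log.Fatal(err)
		}
	}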

                                                
                                    
TestNetworkPlugins/group/flannel/Start (72.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-173840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-173840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m12.668684641s)
--- PASS: TestNetworkPlugins/group/flannel/Start (72.67s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-173840 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-173840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-173840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-173840 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-173840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-173840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-173840 "pgrep -a kubelet"
I1025 10:01:47.507412  107766 config.go:182] Loaded profile config "enable-default-cni-173840": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-173840 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lqv6p" [4ef6c622-3131-4cc6-97f2-b54f060d4e0a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 10:01:48.420451  107766 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/no-preload-096917/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-lqv6p" [4ef6c622-3131-4cc6-97f2-b54f060d4e0a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005244764s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (56.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-173840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-173840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (56.129644113s)
--- PASS: TestNetworkPlugins/group/bridge/Start (56.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-173840 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-173840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-173840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-v7cgn" [7c64f3ec-9286-42f3-ae0b-9edc2b4670bc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005587518s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-173840 "pgrep -a kubelet"
I1025 10:02:48.191397  107766 config.go:182] Loaded profile config "bridge-173840": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-173840 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-96r8g" [a14106e2-ed90-4084-9727-32672ca7f520] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-96r8g" [a14106e2-ed90-4084-9727-32672ca7f520] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003365064s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-173840 "pgrep -a kubelet"
I1025 10:02:51.233154  107766 config.go:182] Loaded profile config "flannel-173840": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-173840 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nv2gn" [6af4197d-f9ae-4d0c-b446-0c3e502e93d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nv2gn" [6af4197d-f9ae-4d0c-b446-0c3e502e93d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004339645s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-173840 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-173840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-173840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-173840 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-173840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-173840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    

Test skip (40/329)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.33
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.02
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
265 TestStartStop/group/disable-driver-mounts 0.19
270 TestNetworkPlugins/group/kubenet 3.57
282 TestNetworkPlugins/group/cilium 4.43

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.33s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-887867 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
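
Editor's note: most of the skips in this block follow one gating pattern: inspect the configured runtime or driver at the top of the test and bail out before doing any work. A minimal sketch of that shape; ContainerRuntime is a hypothetical stand-in for however the harness exposes the --container-runtime value:

	package integration

	import "testing"

	// ContainerRuntime stands in for the harness's accessor for the
	// --container-runtime flag; the name is hypothetical.
	func ContainerRuntime() string { return "crio" }

	func TestDockerFlagsSketch(t *testing.T) {
		if ContainerRuntime() != "docker" {
			t.Skipf("skipping: only runs with docker container runtime, currently testing %s",
				ContainerRuntime())
		}
		// docker-specific assertions would follow here
	}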

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)
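
Editor's note: all eight tunnel skips below trace back to a single precondition: manipulating routes needs passwordless sudo on the host. One way to probe that without hanging on a password prompt is sudo's non-interactive flag; this is a sketch of the idea, not the harness's exact check:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `sudo -n` fails immediately instead of prompting, so a failure here
		// means route changes would need a password and the tunnel tests
		// cannot run unattended.
		if err := exec.Command("sudo", "-n", "true").Run(); err != nil {
			fmt.Println("password required for sudo, skipping tunnel tests:", err)
			return
		}
		fmt.Println("passwordless sudo available; tunnel tests could run")
	}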

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-890260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-890260
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
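
Editor's note: even a skipped group cleans up after itself, as the two helpers_test.go lines show: the placeholder profile is deleted before the skip is recorded. A sketch of that register-cleanup-then-skip shape (profile name taken from the log; the delete invocation is the same binary used throughout):

	package integration

	import (
		"os/exec"
		"testing"
	)

	func TestDisableDriverMountsSketch(t *testing.T) {
		const profile = "disable-driver-mounts-890260"
		t.Cleanup(func() {
			// Mirrors helpers_test.go:178 above: drop the profile even when
			// the test body never ran. Cleanups still fire after t.Skip.
			exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).Run()
		})
		t.Skip("only runs on virtualbox")
	}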

                                                
                                    
TestNetworkPlugins/group/kubenet (3.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-173840 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-173840

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-173840

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-173840

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-173840

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-173840

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-173840

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-173840

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-173840

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-173840

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-173840

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: /etc/hosts:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: /etc/resolv.conf:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-173840

>>> host: crictl pods:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: crictl containers:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> k8s: describe netcat deployment:
error: context "kubenet-173840" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-173840" does not exist

>>> k8s: netcat logs:
error: context "kubenet-173840" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-173840" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-173840" does not exist

>>> k8s: coredns logs:
error: context "kubenet-173840" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-173840" does not exist

>>> k8s: api server logs:
error: context "kubenet-173840" does not exist

>>> host: /etc/cni:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: ip a s:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: ip r s:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: iptables-save:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: iptables table nat:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-173840" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-173840" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-173840" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: kubelet daemon config:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> k8s: kubelet logs:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21794-103842/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:51:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.82:8443
  name: kubernetes-upgrade-258418
contexts:
- context:
    cluster: kubernetes-upgrade-258418
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:51:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-258418
  name: kubernetes-upgrade-258418
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-258418
  user:
    client-certificate: /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/kubernetes-upgrade-258418/client.crt
    client-key: /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/kubernetes-upgrade-258418/client.key
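The kubeconfig above is left over from kubernetes-upgrade-258418 and its current-context is empty; no kubenet-173840 context was ever written, which is why every kubectl probe in this block fails with "context was not found". A minimal sketch of verifying that with client-go (the path is illustrative, not the harness's actual kubeconfig location):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative path; this run keeps its kubeconfig under
        // /home/jenkins/minikube-integration/... instead.
        cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        if _, ok := cfg.Contexts["kubenet-173840"]; !ok {
            // The same condition kubectl reports above.
            fmt.Println(`context "kubenet-173840" does not exist`)
        }
    }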

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-173840

>>> host: docker daemon status:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: docker daemon config:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: docker system info:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: cri-docker daemon status:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: cri-docker daemon config:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: cri-dockerd version:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: containerd daemon status:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: containerd daemon config:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: containerd config dump:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: crio daemon status:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: crio daemon config:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: /etc/crio:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

>>> host: crio config:
* Profile "kubenet-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-173840"

----------------------- debugLogs end: kubenet-173840 [took: 3.386923301s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-173840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-173840
--- SKIP: TestNetworkPlugins/group/kubenet (3.57s)
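The skip at net_test.go:93 is runtime-gated: kubenet has no CNI support, so a crio-based run cannot exercise it. A minimal sketch of that gate (ContainerRuntime is an illustrative stand-in, not net_test.go's real helper):

    package sketch

    import "testing"

    // ContainerRuntime stands in for the runtime the suite was started
    // with (crio in this run); illustrative only.
    func ContainerRuntime() string { return "crio" }

    func TestKubenetGate(t *testing.T) {
        // Runtimes that require a CNI plugin cannot run the kubenet group.
        if rt := ContainerRuntime(); rt == "crio" || rt == "containerd" {
            t.Skipf("Skipping the test as %s container runtimes requires CNI", rt)
        }
    }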

                                                
                                    
TestNetworkPlugins/group/cilium (4.43s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-173840 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-173840

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-173840

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-173840

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-173840

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-173840

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-173840

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-173840

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-173840

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-173840

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-173840

>>> host: /etc/nsswitch.conf:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: /etc/hosts:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: /etc/resolv.conf:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-173840

>>> host: crictl pods:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: crictl containers:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> k8s: describe netcat deployment:
error: context "cilium-173840" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-173840" does not exist

>>> k8s: netcat logs:
error: context "cilium-173840" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-173840" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-173840" does not exist

>>> k8s: coredns logs:
error: context "cilium-173840" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-173840" does not exist

>>> k8s: api server logs:
error: context "cilium-173840" does not exist

>>> host: /etc/cni:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: ip a s:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: ip r s:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: iptables-save:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: iptables table nat:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-173840

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-173840

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-173840" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-173840" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-173840

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-173840

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-173840" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-173840" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-173840" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-173840" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-173840" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: kubelet daemon config:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> k8s: kubelet logs:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21794-103842/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:51:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.82:8443
  name: kubernetes-upgrade-258418
contexts:
- context:
    cluster: kubernetes-upgrade-258418
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:51:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-258418
  name: kubernetes-upgrade-258418
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-258418
  user:
    client-certificate: /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/kubernetes-upgrade-258418/client.crt
    client-key: /home/jenkins/minikube-integration/21794-103842/.minikube/profiles/kubernetes-upgrade-258418/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-173840

>>> host: docker daemon status:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: docker daemon config:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: docker system info:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: cri-docker daemon status:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: cri-docker daemon config:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: cri-dockerd version:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: containerd daemon status:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: containerd daemon config:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: containerd config dump:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: crio daemon status:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: crio daemon config:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: /etc/crio:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

>>> host: crio config:
* Profile "cilium-173840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-173840"

----------------------- debugLogs end: cilium-173840 [took: 4.234768085s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-173840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-173840
--- SKIP: TestNetworkPlugins/group/cilium (4.43s)
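Each debugLogs block above is a fixed sweep of diagnostic commands printed under ">>> label:" headers; run against a profile that was never started, every probe fails fast with a context or profile error. A minimal sketch of that pattern (the probe list and shell-out are illustrative, not minikube's actual debugLogs implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        profile := "cilium-173840" // illustrative
        probes := []struct{ label, cmd string }{
            {"host: /etc/resolv.conf", "minikube -p " + profile + " ssh -- cat /etc/resolv.conf"},
            {"k8s: kubectl config", "kubectl config view"},
        }
        for _, p := range probes {
            fmt.Printf(">>> %s:\n", p.label)
            // CombinedOutput keeps stderr, so failures such as
            // `context "cilium-173840" does not exist` land inline,
            // as they do in the log above.
            out, _ := exec.Command("sh", "-c", p.cmd).CombinedOutput()
            fmt.Println(string(out))
        }
    }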

                                                
                                    