Test Report: KVM_Linux_crio 21772

32e66bacf90aad56df50495b30e504a3036ca148:2025-10-26:42070

Tests failed (2/329)

Order  Failed test                  Duration (s)
37     TestAddons/parallel/Ingress  158.41
243    TestPreload                  153.39
TestAddons/parallel/Ingress (158.41s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-465751 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-465751 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-465751 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [3f70c607-ec3d-4882-bc7f-844468c63e6f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [3f70c607-ec3d-4882-bc7f-844468c63e6f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003244227s
I1026 07:52:19.550478   13321 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-465751 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.964667706s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-465751 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.128
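Note: "Process exited with status 28" above is the remote command's exit status, and 28 is curl's "operation timed out" code, so most likely nothing answered on port 80 inside the VM within the ~2m13s window. A manual reproduction against the same profile might look like the sketch below (profile name, host header, and namespace are taken from this log; ingress-nginx-controller is the ingress addon's usual deployment name, so adjust if it differs):

  out/minikube-linux-amd64 -p addons-465751 ssh "curl -v --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
  kubectl --context addons-465751 -n ingress-nginx get pods,svc -o wide
  kubectl --context addons-465751 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50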
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-465751 -n addons-465751
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-465751 logs -n 25: (1.244115243s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-666462                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-666462 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ start   │ --download-only -p binary-mirror-183743 --alsologtostderr --binary-mirror http://127.0.0.1:42285 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-183743 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	│ delete  │ -p binary-mirror-183743                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-183743 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ addons  │ disable dashboard -p addons-465751                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	│ addons  │ enable dashboard -p addons-465751                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	│ start   │ -p addons-465751 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:51 UTC │
	│ addons  │ addons-465751 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:51 UTC │ 26 Oct 25 07:51 UTC │
	│ addons  │ addons-465751 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:51 UTC │ 26 Oct 25 07:51 UTC │
	│ addons  │ enable headlamp -p addons-465751 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:51 UTC │ 26 Oct 25 07:51 UTC │
	│ addons  │ addons-465751 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:51 UTC │ 26 Oct 25 07:51 UTC │
	│ ssh     │ addons-465751 ssh cat /opt/local-path-provisioner/pvc-331c72ac-cdbf-4634-9ec1-6085c75e794e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:51 UTC │ 26 Oct 25 07:51 UTC │
	│ addons  │ addons-465751 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:51 UTC │ 26 Oct 25 07:52 UTC │
	│ addons  │ addons-465751 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:51 UTC │ 26 Oct 25 07:52 UTC │
	│ ip      │ addons-465751 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:51 UTC │ 26 Oct 25 07:51 UTC │
	│ addons  │ addons-465751 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:51 UTC │ 26 Oct 25 07:52 UTC │
	│ addons  │ addons-465751 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:52 UTC │ 26 Oct 25 07:52 UTC │
	│ addons  │ addons-465751 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:52 UTC │ 26 Oct 25 07:52 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-465751                                                                                                                                                                                                                                                                                                                                                                                         │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:52 UTC │ 26 Oct 25 07:52 UTC │
	│ addons  │ addons-465751 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:52 UTC │ 26 Oct 25 07:52 UTC │
	│ addons  │ addons-465751 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:52 UTC │ 26 Oct 25 07:52 UTC │
	│ ssh     │ addons-465751 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:52 UTC │                     │
	│ addons  │ addons-465751 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:52 UTC │ 26 Oct 25 07:52 UTC │
	│ addons  │ addons-465751 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:52 UTC │ 26 Oct 25 07:52 UTC │
	│ addons  │ addons-465751 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:52 UTC │ 26 Oct 25 07:52 UTC │
	│ ip      │ addons-465751 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-465751        │ jenkins │ v1.37.0 │ 26 Oct 25 07:54 UTC │ 26 Oct 25 07:54 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 07:47:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 07:47:58.721151   14008 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:47:58.721410   14008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:47:58.721420   14008 out.go:374] Setting ErrFile to fd 2...
	I1026 07:47:58.721425   14008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:47:58.721619   14008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
	I1026 07:47:58.722148   14008 out.go:368] Setting JSON to false
	I1026 07:47:58.722918   14008 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1823,"bootTime":1761463056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 07:47:58.723003   14008 start.go:141] virtualization: kvm guest
	I1026 07:47:58.724791   14008 out.go:179] * [addons-465751] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 07:47:58.725979   14008 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 07:47:58.726017   14008 notify.go:220] Checking for updates...
	I1026 07:47:58.728210   14008 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 07:47:58.729441   14008 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9405/kubeconfig
	I1026 07:47:58.730689   14008 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9405/.minikube
	I1026 07:47:58.731886   14008 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 07:47:58.733139   14008 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 07:47:58.734479   14008 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 07:47:58.764050   14008 out.go:179] * Using the kvm2 driver based on user configuration
	I1026 07:47:58.765157   14008 start.go:305] selected driver: kvm2
	I1026 07:47:58.765170   14008 start.go:925] validating driver "kvm2" against <nil>
	I1026 07:47:58.765180   14008 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 07:47:58.765871   14008 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 07:47:58.766099   14008 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 07:47:58.766123   14008 cni.go:84] Creating CNI manager for ""
	I1026 07:47:58.766161   14008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 07:47:58.766167   14008 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 07:47:58.766205   14008 start.go:349] cluster config:
	{Name:addons-465751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-465751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 07:47:58.766300   14008 iso.go:125] acquiring lock: {Name:mk96f67d8329fb7692bdfa7d5182ebbf9e1ba018 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 07:47:58.768429   14008 out.go:179] * Starting "addons-465751" primary control-plane node in "addons-465751" cluster
	I1026 07:47:58.769417   14008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 07:47:58.769444   14008 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9405/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 07:47:58.769450   14008 cache.go:58] Caching tarball of preloaded images
	I1026 07:47:58.769519   14008 preload.go:233] Found /home/jenkins/minikube-integration/21772-9405/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 07:47:58.769531   14008 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 07:47:58.769807   14008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/config.json ...
	I1026 07:47:58.769827   14008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/config.json: {Name:mka15f4a257095f68fe1b5d8a63686d466825d15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:58.769958   14008 start.go:360] acquireMachinesLock for addons-465751: {Name:mk311ee0c6906dab6c982970197b91c6534b0fc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 07:47:58.770012   14008 start.go:364] duration metric: took 41.601µs to acquireMachinesLock for "addons-465751"
	I1026 07:47:58.770029   14008 start.go:93] Provisioning new machine with config: &{Name:addons-465751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-465751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 07:47:58.770072   14008 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 07:47:58.771517   14008 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1026 07:47:58.771675   14008 start.go:159] libmachine.API.Create for "addons-465751" (driver="kvm2")
	I1026 07:47:58.771700   14008 client.go:168] LocalClient.Create starting
	I1026 07:47:58.771772   14008 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca.pem
	I1026 07:47:58.982343   14008 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/cert.pem
	I1026 07:47:59.146327   14008 main.go:141] libmachine: creating domain...
	I1026 07:47:59.146349   14008 main.go:141] libmachine: creating network...
	I1026 07:47:59.147723   14008 main.go:141] libmachine: found existing default network
	I1026 07:47:59.147961   14008 main.go:141] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1026 07:47:59.148529   14008 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dfa880}
	I1026 07:47:59.148608   14008 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-465751</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
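Once created, the private network defined above can be inspected by hand with libvirt's CLI; a sketch, assuming the qemu:///system URI from the cluster config and the network name from this log:

  virsh --connect qemu:///system net-list --all
  virsh --connect qemu:///system net-dumpxml mk-addons-465751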
	
	I1026 07:47:59.154275   14008 main.go:141] libmachine: creating private network mk-addons-465751 192.168.39.0/24...
	I1026 07:47:59.259285   14008 main.go:141] libmachine: private network mk-addons-465751 192.168.39.0/24 created
	I1026 07:47:59.259707   14008 main.go:141] libmachine: <network>
	  <name>mk-addons-465751</name>
	  <uuid>ef5959dd-3839-43ee-ab02-b82abb5da89d</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:0b:81:6e'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1026 07:47:59.259745   14008 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751 ...
	I1026 07:47:59.259776   14008 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21772-9405/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1026 07:47:59.259786   14008 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21772-9405/.minikube
	I1026 07:47:59.259864   14008 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21772-9405/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21772-9405/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1026 07:47:59.511767   14008 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa...
	I1026 07:47:59.613826   14008 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/addons-465751.rawdisk...
	I1026 07:47:59.613867   14008 main.go:141] libmachine: Writing magic tar header
	I1026 07:47:59.613897   14008 main.go:141] libmachine: Writing SSH key tar header
	I1026 07:47:59.613968   14008 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751 ...
	I1026 07:47:59.614028   14008 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751
	I1026 07:47:59.614045   14008 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751 (perms=drwx------)
	I1026 07:47:59.614062   14008 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21772-9405/.minikube/machines
	I1026 07:47:59.614073   14008 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21772-9405/.minikube/machines (perms=drwxr-xr-x)
	I1026 07:47:59.614099   14008 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21772-9405/.minikube
	I1026 07:47:59.614109   14008 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21772-9405/.minikube (perms=drwxr-xr-x)
	I1026 07:47:59.614120   14008 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21772-9405
	I1026 07:47:59.614128   14008 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21772-9405 (perms=drwxrwxr-x)
	I1026 07:47:59.614141   14008 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1026 07:47:59.614151   14008 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 07:47:59.614162   14008 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1026 07:47:59.614170   14008 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 07:47:59.614181   14008 main.go:141] libmachine: checking permissions on dir: /home
	I1026 07:47:59.614189   14008 main.go:141] libmachine: skipping /home - not owner
	I1026 07:47:59.614193   14008 main.go:141] libmachine: defining domain...
	I1026 07:47:59.615275   14008 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-465751</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/addons-465751.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-465751'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
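The "getting domain XML" step below retrieves the definition libvirt actually stored; the same document can be pulled manually at any time (a sketch, using the domain name from this log):

  virsh --connect qemu:///system dumpxml addons-465751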
	
	I1026 07:47:59.680286   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:7d:15:6a in network default
	I1026 07:47:59.681002   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:47:59.681030   14008 main.go:141] libmachine: starting domain...
	I1026 07:47:59.681035   14008 main.go:141] libmachine: ensuring networks are active...
	I1026 07:47:59.681853   14008 main.go:141] libmachine: Ensuring network default is active
	I1026 07:47:59.682205   14008 main.go:141] libmachine: Ensuring network mk-addons-465751 is active
	I1026 07:47:59.682744   14008 main.go:141] libmachine: getting domain XML...
	I1026 07:47:59.683932   14008 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-465751</name>
	  <uuid>4fe18ab7-3dd9-4cb7-87d0-1cbb1d006e14</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/addons-465751.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:70:d3:cf'/>
	      <source network='mk-addons-465751'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:7d:15:6a'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
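The IP-wait loop that follows (source=lease, then source=arp) mirrors libvirt's interface-address query, which can also be run by hand (a sketch; --source lease reads the DHCP lease file, --source arp falls back to the host ARP table):

  virsh --connect qemu:///system domifaddr addons-465751 --source lease
  virsh --connect qemu:///system domifaddr addons-465751 --source arp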
	
	I1026 07:48:01.052912   14008 main.go:141] libmachine: waiting for domain to start...
	I1026 07:48:01.054112   14008 main.go:141] libmachine: domain is now running
	I1026 07:48:01.054128   14008 main.go:141] libmachine: waiting for IP...
	I1026 07:48:01.054744   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:01.055142   14008 main.go:141] libmachine: no network interface addresses found for domain addons-465751 (source=lease)
	I1026 07:48:01.055153   14008 main.go:141] libmachine: trying to list again with source=arp
	I1026 07:48:01.055445   14008 main.go:141] libmachine: unable to find current IP address of domain addons-465751 in network mk-addons-465751 (interfaces detected: [])
	I1026 07:48:01.055488   14008 retry.go:31] will retry after 206.305809ms: waiting for domain to come up
	I1026 07:48:01.263832   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:01.264340   14008 main.go:141] libmachine: no network interface addresses found for domain addons-465751 (source=lease)
	I1026 07:48:01.264358   14008 main.go:141] libmachine: trying to list again with source=arp
	I1026 07:48:01.264628   14008 main.go:141] libmachine: unable to find current IP address of domain addons-465751 in network mk-addons-465751 (interfaces detected: [])
	I1026 07:48:01.264663   14008 retry.go:31] will retry after 328.318973ms: waiting for domain to come up
	I1026 07:48:01.594108   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:01.594669   14008 main.go:141] libmachine: no network interface addresses found for domain addons-465751 (source=lease)
	I1026 07:48:01.594697   14008 main.go:141] libmachine: trying to list again with source=arp
	I1026 07:48:01.595019   14008 main.go:141] libmachine: unable to find current IP address of domain addons-465751 in network mk-addons-465751 (interfaces detected: [])
	I1026 07:48:01.595054   14008 retry.go:31] will retry after 324.590536ms: waiting for domain to come up
	I1026 07:48:01.921602   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:01.922251   14008 main.go:141] libmachine: no network interface addresses found for domain addons-465751 (source=lease)
	I1026 07:48:01.922265   14008 main.go:141] libmachine: trying to list again with source=arp
	I1026 07:48:01.922581   14008 main.go:141] libmachine: unable to find current IP address of domain addons-465751 in network mk-addons-465751 (interfaces detected: [])
	I1026 07:48:01.922610   14008 retry.go:31] will retry after 379.144659ms: waiting for domain to come up
	I1026 07:48:02.303107   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:02.303647   14008 main.go:141] libmachine: no network interface addresses found for domain addons-465751 (source=lease)
	I1026 07:48:02.303664   14008 main.go:141] libmachine: trying to list again with source=arp
	I1026 07:48:02.304003   14008 main.go:141] libmachine: unable to find current IP address of domain addons-465751 in network mk-addons-465751 (interfaces detected: [])
	I1026 07:48:02.304048   14008 retry.go:31] will retry after 556.359285ms: waiting for domain to come up
	I1026 07:48:02.861665   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:02.862227   14008 main.go:141] libmachine: no network interface addresses found for domain addons-465751 (source=lease)
	I1026 07:48:02.862242   14008 main.go:141] libmachine: trying to list again with source=arp
	I1026 07:48:02.862527   14008 main.go:141] libmachine: unable to find current IP address of domain addons-465751 in network mk-addons-465751 (interfaces detected: [])
	I1026 07:48:02.862570   14008 retry.go:31] will retry after 631.756933ms: waiting for domain to come up
	I1026 07:48:03.496225   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:03.496826   14008 main.go:141] libmachine: no network interface addresses found for domain addons-465751 (source=lease)
	I1026 07:48:03.496848   14008 main.go:141] libmachine: trying to list again with source=arp
	I1026 07:48:03.497184   14008 main.go:141] libmachine: unable to find current IP address of domain addons-465751 in network mk-addons-465751 (interfaces detected: [])
	I1026 07:48:03.497217   14008 retry.go:31] will retry after 1.088296472s: waiting for domain to come up
	I1026 07:48:04.586618   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:04.587220   14008 main.go:141] libmachine: no network interface addresses found for domain addons-465751 (source=lease)
	I1026 07:48:04.587238   14008 main.go:141] libmachine: trying to list again with source=arp
	I1026 07:48:04.587529   14008 main.go:141] libmachine: unable to find current IP address of domain addons-465751 in network mk-addons-465751 (interfaces detected: [])
	I1026 07:48:04.587560   14008 retry.go:31] will retry after 1.320146678s: waiting for domain to come up
	I1026 07:48:05.910045   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:05.910576   14008 main.go:141] libmachine: no network interface addresses found for domain addons-465751 (source=lease)
	I1026 07:48:05.910589   14008 main.go:141] libmachine: trying to list again with source=arp
	I1026 07:48:05.910828   14008 main.go:141] libmachine: unable to find current IP address of domain addons-465751 in network mk-addons-465751 (interfaces detected: [])
	I1026 07:48:05.910858   14008 retry.go:31] will retry after 1.623189084s: waiting for domain to come up
	I1026 07:48:07.536736   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:07.537264   14008 main.go:141] libmachine: no network interface addresses found for domain addons-465751 (source=lease)
	I1026 07:48:07.537275   14008 main.go:141] libmachine: trying to list again with source=arp
	I1026 07:48:07.537557   14008 main.go:141] libmachine: unable to find current IP address of domain addons-465751 in network mk-addons-465751 (interfaces detected: [])
	I1026 07:48:07.537584   14008 retry.go:31] will retry after 1.476334525s: waiting for domain to come up
	I1026 07:48:09.015683   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:09.016245   14008 main.go:141] libmachine: no network interface addresses found for domain addons-465751 (source=lease)
	I1026 07:48:09.016260   14008 main.go:141] libmachine: trying to list again with source=arp
	I1026 07:48:09.016537   14008 main.go:141] libmachine: unable to find current IP address of domain addons-465751 in network mk-addons-465751 (interfaces detected: [])
	I1026 07:48:09.016569   14008 retry.go:31] will retry after 2.354231892s: waiting for domain to come up
	I1026 07:48:11.374017   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:11.374554   14008 main.go:141] libmachine: no network interface addresses found for domain addons-465751 (source=lease)
	I1026 07:48:11.374568   14008 main.go:141] libmachine: trying to list again with source=arp
	I1026 07:48:11.374895   14008 main.go:141] libmachine: unable to find current IP address of domain addons-465751 in network mk-addons-465751 (interfaces detected: [])
	I1026 07:48:11.374929   14008 retry.go:31] will retry after 3.394847456s: waiting for domain to come up
	I1026 07:48:14.772195   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:14.772885   14008 main.go:141] libmachine: domain addons-465751 has current primary IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:14.772903   14008 main.go:141] libmachine: found domain IP: 192.168.39.128
	I1026 07:48:14.772911   14008 main.go:141] libmachine: reserving static IP address...
	I1026 07:48:14.773364   14008 main.go:141] libmachine: unable to find host DHCP lease matching {name: "addons-465751", mac: "52:54:00:70:d3:cf", ip: "192.168.39.128"} in network mk-addons-465751
	I1026 07:48:14.949420   14008 main.go:141] libmachine: reserved static IP address 192.168.39.128 for domain addons-465751
	I1026 07:48:14.949441   14008 main.go:141] libmachine: waiting for SSH...
	I1026 07:48:14.949447   14008 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 07:48:14.952181   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:14.952654   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:14.952680   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:14.952886   14008 main.go:141] libmachine: Using SSH client type: native
	I1026 07:48:14.953100   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I1026 07:48:14.953114   14008 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 07:48:15.053714   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 07:48:15.054065   14008 main.go:141] libmachine: domain creation complete
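The SSH probe above simply runs "exit 0" with the generated machine key; an equivalent manual check (a sketch; key path, user, and IP are the ones recorded in this log):

  ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa \
      docker@192.168.39.128 'exit 0'; echo $?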
	I1026 07:48:15.055611   14008 machine.go:93] provisionDockerMachine start ...
	I1026 07:48:15.058121   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:15.058533   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:15.058555   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:15.058738   14008 main.go:141] libmachine: Using SSH client type: native
	I1026 07:48:15.058962   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I1026 07:48:15.058973   14008 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 07:48:15.158542   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 07:48:15.158570   14008 buildroot.go:166] provisioning hostname "addons-465751"
	I1026 07:48:15.161531   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:15.161975   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:15.162001   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:15.162199   14008 main.go:141] libmachine: Using SSH client type: native
	I1026 07:48:15.162399   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I1026 07:48:15.162414   14008 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-465751 && echo "addons-465751" | sudo tee /etc/hostname
	I1026 07:48:15.276874   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-465751
	
	I1026 07:48:15.279798   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:15.280231   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:15.280256   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:15.280466   14008 main.go:141] libmachine: Using SSH client type: native
	I1026 07:48:15.280692   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I1026 07:48:15.280710   14008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-465751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-465751/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-465751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 07:48:15.390004   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
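The hosts-file script above rewrites (or appends) the 127.0.1.1 entry so the VM can resolve its own new hostname locally; a quick verification, reusing the profile from this log:

  out/minikube-linux-amd64 -p addons-465751 ssh "hostname && grep addons-465751 /etc/hosts"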
	I1026 07:48:15.390046   14008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9405/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9405/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9405/.minikube}
	I1026 07:48:15.390122   14008 buildroot.go:174] setting up certificates
	I1026 07:48:15.390134   14008 provision.go:84] configureAuth start
	I1026 07:48:15.393061   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:15.393498   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:15.393522   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:15.395857   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:15.396229   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:15.396255   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:15.396388   14008 provision.go:143] copyHostCerts
	I1026 07:48:15.396449   14008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9405/.minikube/ca.pem (1078 bytes)
	I1026 07:48:15.396587   14008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9405/.minikube/cert.pem (1123 bytes)
	I1026 07:48:15.396683   14008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9405/.minikube/key.pem (1675 bytes)
	I1026 07:48:15.396729   14008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9405/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca-key.pem org=jenkins.addons-465751 san=[127.0.0.1 192.168.39.128 addons-465751 localhost minikube]
	I1026 07:48:15.620387   14008 provision.go:177] copyRemoteCerts
	I1026 07:48:15.620444   14008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 07:48:15.622882   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:15.623292   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:15.623322   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:15.623508   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:15.703945   14008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 07:48:15.732647   14008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 07:48:15.760826   14008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 07:48:15.788358   14008 provision.go:87] duration metric: took 398.206529ms to configureAuth
	I1026 07:48:15.788398   14008 buildroot.go:189] setting minikube options for container-runtime
	I1026 07:48:15.788556   14008 config.go:182] Loaded profile config "addons-465751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:48:15.791157   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:15.791483   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:15.791503   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:15.791660   14008 main.go:141] libmachine: Using SSH client type: native
	I1026 07:48:15.791872   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I1026 07:48:15.791889   14008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 07:48:16.020028   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 07:48:16.020054   14008 machine.go:96] duration metric: took 964.424847ms to provisionDockerMachine
	I1026 07:48:16.020063   14008 client.go:171] duration metric: took 17.248356388s to LocalClient.Create
	I1026 07:48:16.020081   14008 start.go:167] duration metric: took 17.24841027s to libmachine.API.Create "addons-465751"
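The container-runtime step a few lines up writes /etc/sysconfig/crio.minikube over SSH and restarts CRI-O so the new flags take effect. Run locally as root, the same write-env-file-then-restart pattern is only a few lines of Go; this is an illustration of the pattern, not minikube's code:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// The environment-file contents are the ones shown in the log above.
	const env = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"

	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(env), 0o644); err != nil {
		log.Fatal(err)
	}
	// Restart CRI-O so it re-reads the sysconfig drop-in.
	out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput()
	if err != nil {
		log.Fatalf("restart failed: %v\n%s", err, out)
	}
}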
	I1026 07:48:16.020105   14008 start.go:293] postStartSetup for "addons-465751" (driver="kvm2")
	I1026 07:48:16.020118   14008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 07:48:16.020176   14008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 07:48:16.023169   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:16.023607   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:16.023633   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:16.023778   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:16.103299   14008 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 07:48:16.108074   14008 info.go:137] Remote host: Buildroot 2025.02
	I1026 07:48:16.108133   14008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9405/.minikube/addons for local assets ...
	I1026 07:48:16.108203   14008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9405/.minikube/files for local assets ...
	I1026 07:48:16.108227   14008 start.go:296] duration metric: took 88.115377ms for postStartSetup
	I1026 07:48:16.111300   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:16.111714   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:16.111738   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:16.112003   14008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/config.json ...
	I1026 07:48:16.112216   14008 start.go:128] duration metric: took 17.342135327s to createHost
	I1026 07:48:16.114750   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:16.115181   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:16.115204   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:16.115408   14008 main.go:141] libmachine: Using SSH client type: native
	I1026 07:48:16.115633   14008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I1026 07:48:16.115648   14008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 07:48:16.214271   14008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761464896.168619543
	
	I1026 07:48:16.214302   14008 fix.go:216] guest clock: 1761464896.168619543
	I1026 07:48:16.214312   14008 fix.go:229] Guest: 2025-10-26 07:48:16.168619543 +0000 UTC Remote: 2025-10-26 07:48:16.112228558 +0000 UTC m=+17.437582463 (delta=56.390985ms)
	I1026 07:48:16.214332   14008 fix.go:200] guest clock delta is within tolerance: 56.390985ms
	I1026 07:48:16.214337   14008 start.go:83] releasing machines lock for "addons-465751", held for 17.444315602s
	I1026 07:48:16.217147   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:16.217504   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:16.217522   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:16.218072   14008 ssh_runner.go:195] Run: cat /version.json
	I1026 07:48:16.218151   14008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 07:48:16.221108   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:16.221508   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:16.221546   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:16.221576   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:16.221743   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:16.222104   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:16.222137   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:16.222311   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:16.328556   14008 ssh_runner.go:195] Run: systemctl --version
	I1026 07:48:16.335177   14008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 07:48:16.490762   14008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 07:48:16.499736   14008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 07:48:16.499808   14008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 07:48:16.522473   14008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 07:48:16.522504   14008 start.go:495] detecting cgroup driver to use...
	I1026 07:48:16.522573   14008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 07:48:16.547394   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 07:48:16.564884   14008 docker.go:218] disabling cri-docker service (if available) ...
	I1026 07:48:16.564955   14008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 07:48:16.581978   14008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 07:48:16.598056   14008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 07:48:16.738871   14008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 07:48:16.944856   14008 docker.go:234] disabling docker service ...
	I1026 07:48:16.944917   14008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 07:48:16.962150   14008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 07:48:16.977063   14008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 07:48:17.134673   14008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 07:48:17.274441   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 07:48:17.289487   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 07:48:17.311009   14008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 07:48:17.311103   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:48:17.322929   14008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 07:48:17.322998   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:48:17.334732   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:48:17.346694   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:48:17.358744   14008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 07:48:17.371266   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:48:17.382814   14008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:48:17.402221   14008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:48:17.414695   14008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 07:48:17.424705   14008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 07:48:17.424778   14008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 07:48:17.444777   14008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 07:48:17.456313   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 07:48:17.596828   14008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 07:48:17.708442   14008 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 07:48:17.708518   14008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
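The 60-second wait for /var/run/crio/crio.sock can be approximated with a simple dial loop. A minimal sketch, assuming that successfully connecting to the unix socket is an adequate readiness signal (the log itself uses stat):

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

// waitForSocket polls until a unix socket accepts connections or the
// deadline passes, roughly what the 60s wait logged above does.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready after %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	fmt.Println("crio.sock is accepting connections")
}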
	I1026 07:48:17.713887   14008 start.go:563] Will wait 60s for crictl version
	I1026 07:48:17.713968   14008 ssh_runner.go:195] Run: which crictl
	I1026 07:48:17.717809   14008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 07:48:17.759557   14008 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 07:48:17.759697   14008 ssh_runner.go:195] Run: crio --version
	I1026 07:48:17.788944   14008 ssh_runner.go:195] Run: crio --version
	I1026 07:48:17.819018   14008 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1026 07:48:17.823315   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:17.823728   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:17.823754   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:17.823991   14008 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 07:48:17.828493   14008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 07:48:17.843535   14008 kubeadm.go:883] updating cluster {Name:addons-465751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-465751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 07:48:17.843655   14008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 07:48:17.843716   14008 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 07:48:17.877883   14008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1026 07:48:17.877954   14008 ssh_runner.go:195] Run: which lz4
	I1026 07:48:17.882233   14008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 07:48:17.887073   14008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 07:48:17.887126   14008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1026 07:48:19.181985   14008 crio.go:462] duration metric: took 1.299793551s to copy over tarball
	I1026 07:48:19.182045   14008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 07:48:20.797226   14008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.615152245s)
	I1026 07:48:20.797252   14008 crio.go:469] duration metric: took 1.615242362s to extract the tarball
	I1026 07:48:20.797259   14008 ssh_runner.go:146] rm: /preloaded.tar.lz4
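For reference, the copy-then-extract of the preloaded tarball amounts to untarring an lz4 stream into /var. Below is a pure-Go sketch using github.com/pierrec/lz4/v4 in place of the lz4 CLI; unlike the real `tar --xattrs` invocation it skips xattrs, symlinks, ownership, and path sanitization:

package main

import (
	"archive/tar"
	"io"
	"log"
	"os"
	"path/filepath"

	"github.com/pierrec/lz4/v4"
)

// extractTarLz4 streams an lz4-compressed tarball into dest, handling
// only directories and regular files.
func extractTarLz4(archive, dest string) error {
	f, err := os.Open(archive)
	if err != nil {
		return err
	}
	defer f.Close()

	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil // end of archive
		}
		if err != nil {
			return err
		}
		target := filepath.Join(dest, hdr.Name)
		switch hdr.Typeflag {
		case tar.TypeDir:
			if err := os.MkdirAll(target, os.FileMode(hdr.Mode)); err != nil {
				return err
			}
		case tar.TypeReg:
			out, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
			if err != nil {
				return err
			}
			if _, err := io.Copy(out, tr); err != nil {
				out.Close()
				return err
			}
			out.Close()
		}
	}
}

func main() {
	if err := extractTarLz4("/preloaded.tar.lz4", "/var"); err != nil {
		log.Fatal(err)
	}
}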
	I1026 07:48:20.840276   14008 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 07:48:20.883911   14008 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 07:48:20.883938   14008 cache_images.go:85] Images are preloaded, skipping loading
	I1026 07:48:20.883947   14008 kubeadm.go:934] updating node { 192.168.39.128 8443 v1.34.1 crio true true} ...
	I1026 07:48:20.884044   14008 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-465751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-465751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 07:48:20.884123   14008 ssh_runner.go:195] Run: crio config
	I1026 07:48:20.930573   14008 cni.go:84] Creating CNI manager for ""
	I1026 07:48:20.930598   14008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 07:48:20.930613   14008 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 07:48:20.930643   14008 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-465751 NodeName:addons-465751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 07:48:20.930745   14008 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-465751"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.128"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 07:48:20.930804   14008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 07:48:20.942593   14008 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 07:48:20.942670   14008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 07:48:20.953825   14008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1026 07:48:20.975673   14008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 07:48:20.996800   14008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
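The kubeadm.yaml printed above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One quick way to sanity-check such a file from Go is to split on the document separators and decode each header; a sketch with sigs.k8s.io/yaml, purely illustrative since kubeadm validates the config itself on init:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"sigs.k8s.io/yaml"
)

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	// Split the multi-document file and report each document's type.
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var m struct {
			APIVersion string `json:"apiVersion"`
			Kind       string `json:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", m.APIVersion, m.Kind)
	}
}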
	I1026 07:48:21.015885   14008 ssh_runner.go:195] Run: grep 192.168.39.128	control-plane.minikube.internal$ /etc/hosts
	I1026 07:48:21.019862   14008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 07:48:21.034268   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 07:48:21.168279   14008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 07:48:21.198603   14008 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751 for IP: 192.168.39.128
	I1026 07:48:21.198633   14008 certs.go:195] generating shared ca certs ...
	I1026 07:48:21.198654   14008 certs.go:227] acquiring lock for ca certs: {Name:mk0cc452f34380f71cd1e1f6ef82498430bd406d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:21.198831   14008 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9405/.minikube/ca.key
	I1026 07:48:21.612727   14008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9405/.minikube/ca.crt ...
	I1026 07:48:21.612754   14008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/.minikube/ca.crt: {Name:mk633ca73d8c6e9deff2e3b47cd163c74912d197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:21.612913   14008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9405/.minikube/ca.key ...
	I1026 07:48:21.612924   14008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/.minikube/ca.key: {Name:mk1f0020e9a52cd8af7936a4f0c59fcca90b29a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:21.612993   14008 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9405/.minikube/proxy-client-ca.key
	I1026 07:48:21.831960   14008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9405/.minikube/proxy-client-ca.crt ...
	I1026 07:48:21.831985   14008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/.minikube/proxy-client-ca.crt: {Name:mkabd52f6b9bb1f96f92f2b57896b46e6e0848bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:21.832150   14008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9405/.minikube/proxy-client-ca.key ...
	I1026 07:48:21.832161   14008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/.minikube/proxy-client-ca.key: {Name:mk9237b14acb0102e212e8965e3f08a13f3760e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:21.832226   14008 certs.go:257] generating profile certs ...
	I1026 07:48:21.832286   14008 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.key
	I1026 07:48:21.832305   14008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt with IP's: []
	I1026 07:48:21.857372   14008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt ...
	I1026 07:48:21.857392   14008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: {Name:mk6fb7a5fa6047e0c0af902e7fa5e83550f026b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:21.857521   14008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.key ...
	I1026 07:48:21.857531   14008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.key: {Name:mk0cfc0e43205f95bc932d25ceca0cc203c0fa8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:21.857612   14008 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/apiserver.key.ea36de47
	I1026 07:48:21.857633   14008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/apiserver.crt.ea36de47 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128]
	I1026 07:48:22.049111   14008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/apiserver.crt.ea36de47 ...
	I1026 07:48:22.049139   14008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/apiserver.crt.ea36de47: {Name:mkf17aea47e9fd61e0a42a7b3330eb2c9bde56ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:22.049286   14008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/apiserver.key.ea36de47 ...
	I1026 07:48:22.049298   14008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/apiserver.key.ea36de47: {Name:mk1645b8e96d7a0690a8d011ca02ea55e2d22604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:22.049361   14008 certs.go:382] copying /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/apiserver.crt.ea36de47 -> /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/apiserver.crt
	I1026 07:48:22.049440   14008 certs.go:386] copying /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/apiserver.key.ea36de47 -> /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/apiserver.key
	I1026 07:48:22.049487   14008 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/proxy-client.key
	I1026 07:48:22.049503   14008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/proxy-client.crt with IP's: []
	I1026 07:48:22.114584   14008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/proxy-client.crt ...
	I1026 07:48:22.114613   14008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/proxy-client.crt: {Name:mk4f38e218e6212d62c6b0c303fdd6c4f1f8dd48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:22.114756   14008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/proxy-client.key ...
	I1026 07:48:22.114766   14008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/proxy-client.key: {Name:mk2280c0b95c0d21aa114ae4dd10e35861f06af2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
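The certs.go/crypto.go steps above build a shared CA, then sign client and serving certs against it. The core of that flow in the Go standard library looks roughly like the following; key sizes, lifetimes, and subject names here are placeholder assumptions, not minikube's actual values:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	// Self-signed CA, analogous to the "minikubeCA" generation step.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Client cert signed by the CA, analogous to the profile "client.crt".
	cliKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	cliTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	cliDER, err := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: caDER})
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: cliDER})
}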
	I1026 07:48:22.114921   14008 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 07:48:22.114954   14008 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca.pem (1078 bytes)
	I1026 07:48:22.114974   14008 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/cert.pem (1123 bytes)
	I1026 07:48:22.114992   14008 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/key.pem (1675 bytes)
	I1026 07:48:22.115518   14008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 07:48:22.148592   14008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 07:48:22.185451   14008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 07:48:22.215697   14008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1026 07:48:22.245599   14008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 07:48:22.275995   14008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 07:48:22.305390   14008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 07:48:22.333998   14008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 07:48:22.364135   14008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 07:48:22.394444   14008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 07:48:22.415131   14008 ssh_runner.go:195] Run: openssl version
	I1026 07:48:22.422320   14008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 07:48:22.436173   14008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 07:48:22.441755   14008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:48 /usr/share/ca-certificates/minikubeCA.pem
	I1026 07:48:22.441813   14008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 07:48:22.449327   14008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 07:48:22.462893   14008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 07:48:22.467780   14008 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 07:48:22.467833   14008 kubeadm.go:400] StartCluster: {Name:addons-465751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-465751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 07:48:22.467894   14008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:48:22.467965   14008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:48:22.513942   14008 cri.go:89] found id: ""
	I1026 07:48:22.514001   14008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 07:48:22.526702   14008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 07:48:22.539017   14008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 07:48:22.550271   14008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 07:48:22.550286   14008 kubeadm.go:157] found existing configuration files:
	
	I1026 07:48:22.550324   14008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 07:48:22.560493   14008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 07:48:22.560543   14008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 07:48:22.572194   14008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 07:48:22.583656   14008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 07:48:22.583718   14008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 07:48:22.595414   14008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 07:48:22.605592   14008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 07:48:22.605658   14008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 07:48:22.616259   14008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 07:48:22.626200   14008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 07:48:22.626258   14008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 07:48:22.637522   14008 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 07:48:22.694757   14008 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 07:48:22.695390   14008 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 07:48:22.802980   14008 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 07:48:22.803147   14008 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 07:48:22.803388   14008 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 07:48:22.815327   14008 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 07:48:22.955212   14008 out.go:252]   - Generating certificates and keys ...
	I1026 07:48:22.955349   14008 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 07:48:22.955456   14008 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 07:48:23.291182   14008 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 07:48:23.541235   14008 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 07:48:23.944061   14008 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 07:48:24.046165   14008 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 07:48:24.283473   14008 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 07:48:24.283595   14008 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-465751 localhost] and IPs [192.168.39.128 127.0.0.1 ::1]
	I1026 07:48:25.022671   14008 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 07:48:25.022822   14008 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-465751 localhost] and IPs [192.168.39.128 127.0.0.1 ::1]
	I1026 07:48:25.238564   14008 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 07:48:25.414637   14008 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 07:48:25.538521   14008 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 07:48:25.538584   14008 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 07:48:26.007493   14008 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 07:48:26.296392   14008 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 07:48:26.482891   14008 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 07:48:26.733055   14008 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 07:48:27.095646   14008 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 07:48:27.096259   14008 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 07:48:27.098462   14008 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 07:48:27.100269   14008 out.go:252]   - Booting up control plane ...
	I1026 07:48:27.100358   14008 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 07:48:27.101113   14008 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 07:48:27.101301   14008 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 07:48:27.123912   14008 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 07:48:27.124117   14008 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 07:48:27.130946   14008 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 07:48:27.131258   14008 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 07:48:27.131334   14008 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 07:48:27.309775   14008 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 07:48:27.309877   14008 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 07:48:27.811406   14008 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.668735ms
	I1026 07:48:27.814176   14008 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 07:48:27.814298   14008 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.128:8443/livez
	I1026 07:48:27.814411   14008 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 07:48:27.814486   14008 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 07:48:30.105714   14008 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.294400629s
	I1026 07:48:31.797558   14008 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.988127835s
	I1026 07:48:33.809246   14008 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001633838s
	I1026 07:48:33.822776   14008 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 07:48:34.647709   14008 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 07:48:34.755987   14008 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 07:48:34.756256   14008 kubeadm.go:318] [mark-control-plane] Marking the node addons-465751 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 07:48:34.772967   14008 kubeadm.go:318] [bootstrap-token] Using token: 32ep9c.f21701lh7z43b0mv
	I1026 07:48:34.774233   14008 out.go:252]   - Configuring RBAC rules ...
	I1026 07:48:34.774408   14008 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 07:48:34.780422   14008 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 07:48:34.789220   14008 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 07:48:34.794961   14008 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 07:48:34.802185   14008 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 07:48:34.806376   14008 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 07:48:34.820477   14008 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 07:48:35.095994   14008 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 07:48:35.588288   14008 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 07:48:35.590136   14008 kubeadm.go:318] 
	I1026 07:48:35.590216   14008 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 07:48:35.590228   14008 kubeadm.go:318] 
	I1026 07:48:35.590363   14008 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 07:48:35.590393   14008 kubeadm.go:318] 
	I1026 07:48:35.590437   14008 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 07:48:35.590522   14008 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 07:48:35.590624   14008 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 07:48:35.590637   14008 kubeadm.go:318] 
	I1026 07:48:35.590713   14008 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 07:48:35.590725   14008 kubeadm.go:318] 
	I1026 07:48:35.590804   14008 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 07:48:35.590821   14008 kubeadm.go:318] 
	I1026 07:48:35.590898   14008 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 07:48:35.591003   14008 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 07:48:35.591133   14008 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 07:48:35.591159   14008 kubeadm.go:318] 
	I1026 07:48:35.591274   14008 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 07:48:35.591389   14008 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 07:48:35.591403   14008 kubeadm.go:318] 
	I1026 07:48:35.591535   14008 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 32ep9c.f21701lh7z43b0mv \
	I1026 07:48:35.591705   14008 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:67e85fc6b3cf837877dc5cb26dabef1bb4195b96c47ae2d0929f6f4266adb167 \
	I1026 07:48:35.591739   14008 kubeadm.go:318] 	--control-plane 
	I1026 07:48:35.591753   14008 kubeadm.go:318] 
	I1026 07:48:35.591857   14008 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 07:48:35.591866   14008 kubeadm.go:318] 
	I1026 07:48:35.591973   14008 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 32ep9c.f21701lh7z43b0mv \
	I1026 07:48:35.592118   14008 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:67e85fc6b3cf837877dc5cb26dabef1bb4195b96c47ae2d0929f6f4266adb167 
	I1026 07:48:35.594367   14008 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 07:48:35.594398   14008 cni.go:84] Creating CNI manager for ""
	I1026 07:48:35.594408   14008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 07:48:35.596071   14008 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 07:48:35.597412   14008 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 07:48:35.612998   14008 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
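The 496-byte /etc/cni/net.d/1-k8s.conflist itself is not reproduced in the log. A generic bridge conflist of the kind this step writes might look like the output of the following sketch; every field value here (CNI version, bridge name, subnet, portmap chaining) is an assumption, not the actual file contents:

package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	// A guess at the shape of a bridge CNI conflist; the real file is
	// not printed in the log, so all values below are illustrative.
	conf := map[string]interface{}{
		"cniVersion": "0.4.0",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	data, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
		log.Fatal(err)
	}
}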
	I1026 07:48:35.637753   14008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 07:48:35.637810   14008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:35.637842   14008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-465751 minikube.k8s.io/updated_at=2025_10_26T07_48_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=addons-465751 minikube.k8s.io/primary=true
	I1026 07:48:35.768980   14008 ops.go:34] apiserver oom_adj: -16
	I1026 07:48:35.769040   14008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:36.270107   14008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:36.769255   14008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:37.269874   14008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:37.769699   14008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:38.269349   14008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:38.769551   14008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:39.269374   14008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:39.770162   14008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:39.851284   14008 kubeadm.go:1113] duration metric: took 4.213528293s to wait for elevateKubeSystemPrivileges
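The burst of `kubectl get sa default` calls above (roughly one every 500ms) is minikube polling until the default service account exists, which is what the elevateKubeSystemPrivileges timing refers to. As a plain shell wait it amounts to (sketch):

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done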
	I1026 07:48:39.851320   14008 kubeadm.go:402] duration metric: took 17.383492068s to StartCluster
	I1026 07:48:39.851338   14008 settings.go:142] acquiring lock: {Name:mkae317b35dec50359a6773585fd9b9fe6191d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:39.851483   14008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9405/kubeconfig
	I1026 07:48:39.851842   14008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/kubeconfig: {Name:mk03435388f71a675261bd85aa1ac6a9492586b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:39.852033   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 07:48:39.852041   14008 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 07:48:39.852065   14008 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
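Each key in the toEnable map above is an ordinary addon toggle; against a running profile the same switches are flipped from the CLI like this (sketch):

	minikube -p addons-465751 addons enable ingress
	minikube -p addons-465751 addons disable volcano
	minikube -p addons-465751 addons list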
	I1026 07:48:39.852183   14008 addons.go:69] Setting yakd=true in profile "addons-465751"
	I1026 07:48:39.852194   14008 addons.go:69] Setting inspektor-gadget=true in profile "addons-465751"
	I1026 07:48:39.852209   14008 addons.go:238] Setting addon inspektor-gadget=true in "addons-465751"
	I1026 07:48:39.852213   14008 addons.go:69] Setting metrics-server=true in profile "addons-465751"
	I1026 07:48:39.852222   14008 addons.go:69] Setting default-storageclass=true in profile "addons-465751"
	I1026 07:48:39.852243   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.852245   14008 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-465751"
	I1026 07:48:39.852248   14008 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-465751"
	I1026 07:48:39.852257   14008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-465751"
	I1026 07:48:39.852265   14008 addons.go:69] Setting volcano=true in profile "addons-465751"
	I1026 07:48:39.852264   14008 config.go:182] Loaded profile config "addons-465751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:48:39.852276   14008 addons.go:238] Setting addon volcano=true in "addons-465751"
	I1026 07:48:39.852279   14008 addons.go:69] Setting ingress-dns=true in profile "addons-465751"
	I1026 07:48:39.852294   14008 addons.go:238] Setting addon ingress-dns=true in "addons-465751"
	I1026 07:48:39.852280   14008 addons.go:69] Setting ingress=true in profile "addons-465751"
	I1026 07:48:39.852314   14008 addons.go:69] Setting cloud-spanner=true in profile "addons-465751"
	I1026 07:48:39.852316   14008 addons.go:69] Setting gcp-auth=true in profile "addons-465751"
	I1026 07:48:39.852325   14008 addons.go:238] Setting addon cloud-spanner=true in "addons-465751"
	I1026 07:48:39.852329   14008 addons.go:238] Setting addon ingress=true in "addons-465751"
	I1026 07:48:39.852332   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.852336   14008 mustload.go:65] Loading cluster: addons-465751
	I1026 07:48:39.852339   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.852361   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.852498   14008 config.go:182] Loaded profile config "addons-465751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:48:39.852558   14008 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-465751"
	I1026 07:48:39.852592   14008 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-465751"
	I1026 07:48:39.852627   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.853235   14008 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-465751"
	I1026 07:48:39.853262   14008 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-465751"
	I1026 07:48:39.853285   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.852235   14008 addons.go:69] Setting storage-provisioner=true in profile "addons-465751"
	I1026 07:48:39.853449   14008 addons.go:238] Setting addon storage-provisioner=true in "addons-465751"
	I1026 07:48:39.853476   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.852263   14008 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-465751"
	I1026 07:48:39.853621   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.852260   14008 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-465751"
	I1026 07:48:39.852267   14008 addons.go:69] Setting registry=true in profile "addons-465751"
	I1026 07:48:39.853667   14008 addons.go:238] Setting addon registry=true in "addons-465751"
	I1026 07:48:39.853873   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.852303   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.852239   14008 addons.go:238] Setting addon metrics-server=true in "addons-465751"
	I1026 07:48:39.854077   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.854345   14008 out.go:179] * Verifying Kubernetes components...
	I1026 07:48:39.852209   14008 addons.go:238] Setting addon yakd=true in "addons-465751"
	I1026 07:48:39.852272   14008 addons.go:69] Setting registry-creds=true in profile "addons-465751"
	I1026 07:48:39.854706   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.854718   14008 addons.go:238] Setting addon registry-creds=true in "addons-465751"
	I1026 07:48:39.854745   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.852307   14008 addons.go:69] Setting volumesnapshots=true in profile "addons-465751"
	I1026 07:48:39.854819   14008 addons.go:238] Setting addon volumesnapshots=true in "addons-465751"
	I1026 07:48:39.854839   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.855805   14008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 07:48:39.858159   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.858663   14008 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1026 07:48:39.859995   14008 addons.go:238] Setting addon default-storageclass=true in "addons-465751"
	I1026 07:48:39.860045   14008 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1026 07:48:39.860051   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.860119   14008 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 07:48:39.860047   14008 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1026 07:48:39.860206   14008 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1026 07:48:39.860521   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 07:48:39.862134   14008 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 07:48:39.862175   14008 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1026 07:48:39.862180   14008 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1026 07:48:39.862215   14008 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 07:48:39.862213   14008 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1026 07:48:39.862135   14008 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 07:48:39.862227   14008 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	W1026 07:48:39.862415   14008 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1026 07:48:39.862777   14008 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-465751"
	I1026 07:48:39.863529   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:39.863564   14008 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 07:48:39.863948   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 07:48:39.863586   14008 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 07:48:39.864150   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1026 07:48:39.863561   14008 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 07:48:39.864225   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1026 07:48:39.864331   14008 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 07:48:39.864347   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 07:48:39.863608   14008 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 07:48:39.864683   14008 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 07:48:39.864956   14008 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 07:48:39.865721   14008 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1026 07:48:39.865729   14008 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1026 07:48:39.865727   14008 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1026 07:48:39.865730   14008 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 07:48:39.865751   14008 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1026 07:48:39.866457   14008 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 07:48:39.867057   14008 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 07:48:39.867064   14008 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 07:48:39.867073   14008 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 07:48:39.867074   14008 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 07:48:39.867158   14008 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 07:48:39.867331   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1026 07:48:39.867617   14008 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1026 07:48:39.867634   14008 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1026 07:48:39.867621   14008 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 07:48:39.868354   14008 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 07:48:39.868360   14008 out.go:179]   - Using image docker.io/registry:3.0.0
	I1026 07:48:39.869054   14008 out.go:179]   - Using image docker.io/busybox:stable
	I1026 07:48:39.869182   14008 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 07:48:39.869422   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1026 07:48:39.869832   14008 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 07:48:39.869847   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.869855   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1026 07:48:39.871120   14008 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 07:48:39.871804   14008 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 07:48:39.872097   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.872183   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.872749   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.873184   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:39.873320   14008 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 07:48:39.873335   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 07:48:39.873925   14008 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 07:48:39.874643   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.874675   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.875177   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.875556   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:39.875837   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.876105   14008 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1026 07:48:39.876333   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.876650   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.876967   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.876999   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.877213   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.877246   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.877391   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.877784   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:39.877899   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:39.877993   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.878025   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.878256   14008 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1026 07:48:39.878513   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:39.878474   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.878647   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.879011   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.879252   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.879323   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:39.879625   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.879681   14008 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 07:48:39.879700   14008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 07:48:39.879978   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:39.880166   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.880321   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.880658   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.880975   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.881002   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.881009   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.881285   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.881333   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.881364   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.881398   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.881411   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:39.881533   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.881561   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.881811   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:39.881864   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:39.881882   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.882224   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.882234   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:39.882258   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.882718   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:39.882957   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.882987   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.883056   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.883374   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:39.883732   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.883766   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.883962   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:39.884872   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.885252   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:39.885275   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:39.885406   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	W1026 07:48:40.186577   14008 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56898->192.168.39.128:22: read: connection reset by peer
	I1026 07:48:40.186605   14008 retry.go:31] will retry after 141.819054ms: ssh: handshake failed: read tcp 192.168.39.1:56898->192.168.39.128:22: read: connection reset by peer
	I1026 07:48:40.673904   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 07:48:40.721392   14008 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1026 07:48:40.721426   14008 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1026 07:48:40.721768   14008 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 07:48:40.721791   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 07:48:40.778410   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 07:48:40.896425   14008 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 07:48:40.896457   14008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 07:48:40.911422   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 07:48:40.948206   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 07:48:40.949917   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 07:48:40.960059   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 07:48:40.968941   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 07:48:40.972597   14008 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 07:48:40.972622   14008 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 07:48:40.985274   14008 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:40.985293   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1026 07:48:40.991462   14008 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 07:48:40.991484   14008 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 07:48:41.006158   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 07:48:41.059978   14008 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1026 07:48:41.060009   14008 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1026 07:48:41.329458   14008 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 07:48:41.329486   14008 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 07:48:41.492321   14008 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 07:48:41.492351   14008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 07:48:41.550225   14008 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1026 07:48:41.550251   14008 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1026 07:48:41.588255   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 07:48:41.594752   14008 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 07:48:41.594769   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 07:48:41.606062   14008 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 07:48:41.606096   14008 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 07:48:41.623996   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:41.715271   14008 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 07:48:41.715302   14008 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 07:48:41.723337   14008 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 07:48:41.723369   14008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 07:48:41.751450   14008 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.899381217s)
	I1026 07:48:41.751479   14008 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.895646797s)
	I1026 07:48:41.751565   14008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 07:48:41.751666   14008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
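The sed pipeline above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.39.1). The edit can be verified as below; the stanza shown is derived from the sed expression, not captured output (sketch):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expected, immediately before the forward directive:
	#        hosts {
	#           192.168.39.1 host.minikube.internal
	#           fallthrough
	#        }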
	I1026 07:48:41.842028   14008 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1026 07:48:41.842050   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1026 07:48:41.860308   14008 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 07:48:41.860340   14008 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 07:48:41.889722   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 07:48:41.984169   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 07:48:42.020378   14008 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 07:48:42.020415   14008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 07:48:42.169398   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1026 07:48:42.239966   14008 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 07:48:42.240003   14008 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 07:48:42.429401   14008 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 07:48:42.429434   14008 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 07:48:42.629646   14008 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 07:48:42.629671   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 07:48:42.830350   14008 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 07:48:42.830374   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 07:48:43.089387   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 07:48:43.200672   14008 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 07:48:43.200705   14008 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 07:48:43.546406   14008 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 07:48:43.546438   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 07:48:43.938692   14008 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 07:48:43.938718   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 07:48:44.249025   14008 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 07:48:44.249054   14008 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 07:48:44.581891   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 07:48:46.124556   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.450609557s)
	I1026 07:48:46.124631   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.346187604s)
	I1026 07:48:46.124721   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.213262353s)
	I1026 07:48:46.124801   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.176570449s)
	I1026 07:48:46.124856   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.174910258s)
	I1026 07:48:46.124961   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.164871949s)
	I1026 07:48:47.357906   14008 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 07:48:47.361243   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:47.361755   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:47.361789   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:47.361961   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:47.555963   14008 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 07:48:47.636095   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.667100133s)
	I1026 07:48:47.636143   14008 addons.go:479] Verifying addon ingress=true in "addons-465751"
	I1026 07:48:47.636168   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.629979496s)
	I1026 07:48:47.636241   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.047959364s)
	I1026 07:48:47.636332   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.012311207s)
	W1026 07:48:47.636373   14008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:47.636376   14008 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.884685531s)
	I1026 07:48:47.636397   14008 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1026 07:48:47.636399   14008 retry.go:31] will retry after 157.82107ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
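kubectl emits "apiVersion not set, kind not set" when a manifest is empty or missing its TypeMeta, and the earlier scp line recorded ig-crd.yaml at just 14 bytes, so this failure points at a truncated file rather than an API-server problem. A quick way to confirm on the node (sketch):

	minikube -p addons-465751 ssh -- wc -c /etc/kubernetes/addons/ig-crd.yaml
	minikube -p addons-465751 ssh -- head -c 64 /etc/kubernetes/addons/ig-crd.yaml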
	I1026 07:48:47.636449   14008 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.884854544s)
	I1026 07:48:47.636501   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.746739151s)
	I1026 07:48:47.636529   14008 addons.go:479] Verifying addon registry=true in "addons-465751"
	I1026 07:48:47.636574   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.652371534s)
	I1026 07:48:47.636597   14008 addons.go:479] Verifying addon metrics-server=true in "addons-465751"
	I1026 07:48:47.636703   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.467262911s)
	I1026 07:48:47.637348   14008 node_ready.go:35] waiting up to 6m0s for node "addons-465751" to be "Ready" ...
	I1026 07:48:47.637813   14008 out.go:179] * Verifying ingress addon...
	I1026 07:48:47.638446   14008 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-465751 service yakd-dashboard -n yakd-dashboard
	
	I1026 07:48:47.638461   14008 out.go:179] * Verifying registry addon...
	I1026 07:48:47.639844   14008 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 07:48:47.640554   14008 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 07:48:47.645874   14008 addons.go:238] Setting addon gcp-auth=true in "addons-465751"
	I1026 07:48:47.645923   14008 host.go:66] Checking if "addons-465751" exists ...
	I1026 07:48:47.647984   14008 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 07:48:47.650784   14008 main.go:141] libmachine: domain addons-465751 has defined MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:47.651277   14008 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:d3:cf", ip: ""} in network mk-addons-465751: {Iface:virbr1 ExpiryTime:2025-10-26 08:48:14 +0000 UTC Type:0 Mac:52:54:00:70:d3:cf Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:addons-465751 Clientid:01:52:54:00:70:d3:cf}
	I1026 07:48:47.651301   14008 main.go:141] libmachine: domain addons-465751 has defined IP address 192.168.39.128 and MAC address 52:54:00:70:d3:cf in network mk-addons-465751
	I1026 07:48:47.651445   14008 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/addons-465751/id_rsa Username:docker}
	I1026 07:48:47.740164   14008 node_ready.go:49] node "addons-465751" is "Ready"
	I1026 07:48:47.740202   14008 node_ready.go:38] duration metric: took 102.822246ms for node "addons-465751" to be "Ready" ...
	I1026 07:48:47.740218   14008 api_server.go:52] waiting for apiserver process to appear ...
	I1026 07:48:47.740276   14008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 07:48:47.795248   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:47.885070   14008 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 07:48:47.885123   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:47.885080   14008 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 07:48:47.885149   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:48.220520   14008 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-465751" context rescaled to 1 replicas
	I1026 07:48:48.246307   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:48.246558   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:48.462008   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.372570224s)
	W1026 07:48:48.462058   14008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 07:48:48.462097   14008 retry.go:31] will retry after 231.496753ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
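"no matches for kind ... ensure CRDs are installed first" is the usual CRD-versus-CR ordering race: the snapshot CRDs were created in this same apply but were not yet Established when the VolumeSnapshotClass was validated, which is why the timed retry issued below normally succeeds. Sequencing it by hand would look like (sketch):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml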
	I1026 07:48:48.649103   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:48.651547   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:48.694287   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 07:48:49.295152   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:49.295291   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:49.320887   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.738937877s)
	I1026 07:48:49.320927   14008 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-465751"
	I1026 07:48:49.320951   14008 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.58065382s)
	I1026 07:48:49.320981   14008 api_server.go:72] duration metric: took 9.468920998s to wait for apiserver process to appear ...
	I1026 07:48:49.320989   14008 api_server.go:88] waiting for apiserver healthz status ...
	I1026 07:48:49.321006   14008 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.672998272s)
	I1026 07:48:49.321017   14008 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I1026 07:48:49.322475   14008 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 07:48:49.322488   14008 out.go:179] * Verifying csi-hostpath-driver addon...
	I1026 07:48:49.324095   14008 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1026 07:48:49.324774   14008 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 07:48:49.325291   14008 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 07:48:49.325304   14008 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 07:48:49.337095   14008 api_server.go:279] https://192.168.39.128:8443/healthz returned 200:
	ok
	I1026 07:48:49.343492   14008 api_server.go:141] control plane version: v1.34.1
	I1026 07:48:49.343551   14008 api_server.go:131] duration metric: took 22.549733ms to wait for apiserver health ...
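
The healthz wait above reduces to probing GET https://<apiserver>:8443/healthz until it returns HTTP 200 with a body of "ok", after which the control-plane version is read. A minimal standard-library sketch of that probe (not minikube's implementation; TLS verification is skipped here only to keep the sketch short, which real code should not do):

package healthcheck

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// apiserverHealthy issues the same probe the log shows: GET
// <endpoint>/healthz, expecting status 200 and a body of "ok".
func apiserverHealthy(endpoint string) error {
	client := &http.Client{Transport: &http.Transport{
		// Assumption for brevity only; production code should trust
		// the cluster CA instead of skipping verification.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
		return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
	}
	return nil
}
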
	I1026 07:48:49.343564   14008 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 07:48:49.351962   14008 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 07:48:49.351986   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:49.375507   14008 system_pods.go:59] 20 kube-system pods found
	I1026 07:48:49.375547   14008 system_pods.go:61] "amd-gpu-device-plugin-bn844" [d08c1ce9-d708-42b0-9733-b1fc34e50760] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 07:48:49.375558   14008 system_pods.go:61] "coredns-66bc5c9577-2chmb" [661c6096-625d-4290-a653-628b78de64ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 07:48:49.375570   14008 system_pods.go:61] "coredns-66bc5c9577-kbfd9" [c915e111-5241-43dd-9f41-4920ebfae2dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 07:48:49.375584   14008 system_pods.go:61] "csi-hostpath-attacher-0" [594f843c-e541-4757-a1c9-268357be417c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 07:48:49.375591   14008 system_pods.go:61] "csi-hostpath-resizer-0" [f058713a-a26f-4d38-a2bb-d04efb34adf8] Pending
	I1026 07:48:49.375597   14008 system_pods.go:61] "csi-hostpathplugin-hds4v" [05f50167-0933-4ef4-b6ee-fe8d3650d49b] Pending
	I1026 07:48:49.375605   14008 system_pods.go:61] "etcd-addons-465751" [bef84dd1-78ad-4cb1-9bcc-d078f82f0229] Running
	I1026 07:48:49.375610   14008 system_pods.go:61] "kube-apiserver-addons-465751" [8e86b639-6d89-4dba-a1bb-f020a3c5f05c] Running
	I1026 07:48:49.375615   14008 system_pods.go:61] "kube-controller-manager-addons-465751" [c8e56ba2-0ffc-4e82-89c9-b18fb3fd699b] Running
	I1026 07:48:49.375624   14008 system_pods.go:61] "kube-ingress-dns-minikube" [b9d177cf-2ef7-4e93-bb7f-9a690e8482c3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 07:48:49.375632   14008 system_pods.go:61] "kube-proxy-jfndh" [5565ff2a-a1fd-4447-b5ce-1bb3343e6cc5] Running
	I1026 07:48:49.375638   14008 system_pods.go:61] "kube-scheduler-addons-465751" [f7ea8388-4ba3-44d9-9c67-cd5be0d2b5f4] Running
	I1026 07:48:49.375646   14008 system_pods.go:61] "metrics-server-85b7d694d7-nlhsw" [36bbab07-ba45-458e-85a2-28fa2305a5ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 07:48:49.375658   14008 system_pods.go:61] "nvidia-device-plugin-daemonset-qph55" [d3f4da58-871d-4071-9b3d-e686cde31287] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 07:48:49.375667   14008 system_pods.go:61] "registry-6b586f9694-6z556" [e3274c78-922e-4531-bf22-ada2d7ee76ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 07:48:49.375677   14008 system_pods.go:61] "registry-creds-764b6fb674-nqxtz" [251ec8ac-c063-4c78-9619-e0bd25155a92] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 07:48:49.375686   14008 system_pods.go:61] "registry-proxy-9pmnx" [de45aec4-aed2-4c08-a39d-e1f65e28899e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 07:48:49.375697   14008 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qxz65" [6da62943-838b-449d-910a-e29fd984690a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 07:48:49.375706   14008 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xhq9w" [2ee038af-e28f-4f03-909d-d7c54dcf0474] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 07:48:49.375718   14008 system_pods.go:61] "storage-provisioner" [e909bb90-9eac-4d89-bb56-bf518cc23c65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 07:48:49.375729   14008 system_pods.go:74] duration metric: took 32.156749ms to wait for pod list to return data ...
	I1026 07:48:49.375743   14008 default_sa.go:34] waiting for default service account to be created ...
	I1026 07:48:49.417340   14008 default_sa.go:45] found service account: "default"
	I1026 07:48:49.417369   14008 default_sa.go:55] duration metric: took 41.618438ms for default service account to be created ...
	I1026 07:48:49.417380   14008 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 07:48:49.449743   14008 system_pods.go:86] 20 kube-system pods found
	I1026 07:48:49.449778   14008 system_pods.go:89] "amd-gpu-device-plugin-bn844" [d08c1ce9-d708-42b0-9733-b1fc34e50760] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 07:48:49.449788   14008 system_pods.go:89] "coredns-66bc5c9577-2chmb" [661c6096-625d-4290-a653-628b78de64ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 07:48:49.449799   14008 system_pods.go:89] "coredns-66bc5c9577-kbfd9" [c915e111-5241-43dd-9f41-4920ebfae2dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 07:48:49.449807   14008 system_pods.go:89] "csi-hostpath-attacher-0" [594f843c-e541-4757-a1c9-268357be417c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 07:48:49.449813   14008 system_pods.go:89] "csi-hostpath-resizer-0" [f058713a-a26f-4d38-a2bb-d04efb34adf8] Pending
	I1026 07:48:49.449828   14008 system_pods.go:89] "csi-hostpathplugin-hds4v" [05f50167-0933-4ef4-b6ee-fe8d3650d49b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 07:48:49.449838   14008 system_pods.go:89] "etcd-addons-465751" [bef84dd1-78ad-4cb1-9bcc-d078f82f0229] Running
	I1026 07:48:49.449844   14008 system_pods.go:89] "kube-apiserver-addons-465751" [8e86b639-6d89-4dba-a1bb-f020a3c5f05c] Running
	I1026 07:48:49.449851   14008 system_pods.go:89] "kube-controller-manager-addons-465751" [c8e56ba2-0ffc-4e82-89c9-b18fb3fd699b] Running
	I1026 07:48:49.449860   14008 system_pods.go:89] "kube-ingress-dns-minikube" [b9d177cf-2ef7-4e93-bb7f-9a690e8482c3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 07:48:49.449866   14008 system_pods.go:89] "kube-proxy-jfndh" [5565ff2a-a1fd-4447-b5ce-1bb3343e6cc5] Running
	I1026 07:48:49.449873   14008 system_pods.go:89] "kube-scheduler-addons-465751" [f7ea8388-4ba3-44d9-9c67-cd5be0d2b5f4] Running
	I1026 07:48:49.449881   14008 system_pods.go:89] "metrics-server-85b7d694d7-nlhsw" [36bbab07-ba45-458e-85a2-28fa2305a5ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 07:48:49.449890   14008 system_pods.go:89] "nvidia-device-plugin-daemonset-qph55" [d3f4da58-871d-4071-9b3d-e686cde31287] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 07:48:49.449899   14008 system_pods.go:89] "registry-6b586f9694-6z556" [e3274c78-922e-4531-bf22-ada2d7ee76ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 07:48:49.449912   14008 system_pods.go:89] "registry-creds-764b6fb674-nqxtz" [251ec8ac-c063-4c78-9619-e0bd25155a92] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 07:48:49.449920   14008 system_pods.go:89] "registry-proxy-9pmnx" [de45aec4-aed2-4c08-a39d-e1f65e28899e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 07:48:49.449931   14008 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qxz65" [6da62943-838b-449d-910a-e29fd984690a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 07:48:49.449946   14008 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xhq9w" [2ee038af-e28f-4f03-909d-d7c54dcf0474] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 07:48:49.449957   14008 system_pods.go:89] "storage-provisioner" [e909bb90-9eac-4d89-bb56-bf518cc23c65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 07:48:49.449967   14008 system_pods.go:126] duration metric: took 32.580748ms to wait for k8s-apps to be running ...
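
Both the system_pods listing above and the repeated kapi.go:96 "waiting for pod ... current state: Pending" lines amount to listing pods by label selector and inspecting their phase. A minimal client-go sketch of that loop, under the assumption of an already-configured clientset (the function name is illustrative, not minikube's API):

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForRunning polls pods matching selector in ns until every one
// reports phase Running, the check behind the repeated
// "waiting for pod ... current state: Pending" log lines.
func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				return true
			}
		}
		time.Sleep(time.Second)
	}
	return false
}
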
	I1026 07:48:49.449981   14008 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 07:48:49.450033   14008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 07:48:49.584055   14008 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 07:48:49.584100   14008 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 07:48:49.647495   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:49.648380   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:49.755238   14008 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 07:48:49.755258   14008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1026 07:48:49.821997   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 07:48:49.831852   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:50.147792   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:50.151078   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:50.330668   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:50.649196   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:50.649231   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:50.831714   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:50.839972   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.044676847s)
	W1026 07:48:50.840028   14008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
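
This validation failure is different from the CRD-ordering one earlier: "apiVersion not set, kind not set" means a document inside ig-crd.yaml is missing the two TypeMeta fields every Kubernetes manifest must carry, so kubectl's client-side validation rejects it before anything reaches the API server, and no number of retries can change the outcome. A hedged sketch of the same check (single-document only, for brevity; multi-document YAML would need to be split first):

package manifestcheck

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// checkTypeMeta reports the same defect kubectl's validator flags here:
// a manifest whose apiVersion or kind is unset can never be applied,
// so retrying the apply cannot succeed.
func checkTypeMeta(doc []byte) error {
	var tm metav1.TypeMeta
	if err := yaml.Unmarshal(doc, &tm); err != nil {
		return err
	}
	if tm.APIVersion == "" || tm.Kind == "" {
		return fmt.Errorf("apiVersion/kind not set (got %q/%q)", tm.APIVersion, tm.Kind)
	}
	return nil
}
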
	I1026 07:48:50.840053   14008 retry.go:31] will retry after 355.203603ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
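
The retry.go delays across this log (355ms, 651ms, 699ms, 1.26s, 1.86s, 3.17s, 5.0s, 6.3s) grow roughly exponentially with jitter. A minimal sketch of that pattern, assuming nothing about minikube's internals beyond what the log lines suggest:

package retrysketch

import (
	"math/rand"
	"time"
)

// retryWithBackoff runs op, and on failure sleeps for a jittered,
// roughly doubling delay before trying again, up to maxAttempts.
func retryWithBackoff(op func() error, maxAttempts int) error {
	delay := 300 * time.Millisecond
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Random jitter of up to half the base delay avoids retry storms.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}
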
	I1026 07:48:51.145953   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:51.146348   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:51.195695   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:51.334495   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:51.504568   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.8102346s)
	I1026 07:48:51.504609   14008 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.054552897s)
	I1026 07:48:51.504631   14008 system_svc.go:56] duration metric: took 2.054648015s WaitForService to wait for kubelet
	I1026 07:48:51.504641   14008 kubeadm.go:586] duration metric: took 11.652579444s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 07:48:51.504680   14008 node_conditions.go:102] verifying NodePressure condition ...
	I1026 07:48:51.560956   14008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 07:48:51.560983   14008 node_conditions.go:123] node cpu capacity is 2
	I1026 07:48:51.560995   14008 node_conditions.go:105] duration metric: took 56.310739ms to run NodePressure ...
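
The NodePressure check above reads per-node capacity straight from node status: ephemeral storage of 17734596Ki and a cpu capacity of 2. A short client-go sketch of reading the same fields (illustrative names, configured clientset assumed):

package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity reads the fields the NodePressure check logs:
// per-node ephemeral-storage and cpu capacity from node.Status.Capacity.
func printNodeCapacity(cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
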
	I1026 07:48:51.561006   14008 start.go:241] waiting for startup goroutines ...
	I1026 07:48:51.652688   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.830648724s)
	I1026 07:48:51.653676   14008 addons.go:479] Verifying addon gcp-auth=true in "addons-465751"
	I1026 07:48:51.655744   14008 out.go:179] * Verifying gcp-auth addon...
	I1026 07:48:51.657353   14008 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 07:48:51.693958   14008 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 07:48:51.693993   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:51.694043   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:51.694125   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:51.832394   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:52.146081   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:52.148150   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:52.162243   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:52.333282   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:52.647058   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:52.649510   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:52.748537   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:52.835527   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.639765016s)
	W1026 07:48:52.835570   14008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:52.835588   14008 retry.go:31] will retry after 651.470802ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 07:48:52.847583   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:53.145703   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:53.145816   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:53.161489   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:53.329176   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:53.487227   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:53.645559   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:53.647779   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:53.665970   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:53.828861   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:54.144312   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:54.144464   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:54.161097   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:54.260475   14008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:54.260512   14008 retry.go:31] will retry after 699.554862ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 07:48:54.328933   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:54.644886   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:54.646176   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:54.660849   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:54.831035   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:54.961032   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:55.147283   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:55.149663   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:55.160365   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:55.333634   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:55.646153   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:55.647536   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:55.663023   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:55.828725   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:56.147115   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:56.151292   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:56.163167   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:56.217382   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.25629831s)
	W1026 07:48:56.217421   14008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:56.217445   14008 retry.go:31] will retry after 1.257567383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 07:48:56.330951   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:56.646427   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:56.648439   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:56.663549   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:56.828548   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:57.146378   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:57.146631   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:57.161428   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:57.329564   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:57.475492   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:57.644005   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:57.649038   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:57.662238   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:57.832163   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:58.146131   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:58.146857   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:58.161189   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:58.328635   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:58.644332   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:58.646361   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:58.647702   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.172157583s)
	W1026 07:48:58.647743   14008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:58.647766   14008 retry.go:31] will retry after 1.86445412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 07:48:58.661298   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:58.830195   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:59.147632   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:59.147792   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:59.162255   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:59.330265   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:59.646139   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:59.646810   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:59.660245   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:59.830394   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:00.143723   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:00.146211   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:00.160546   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:00.337013   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:00.513188   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:49:00.645067   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:00.648886   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:00.662559   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:00.831745   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:01.147416   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:01.147715   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:01.165117   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:01.331190   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:01.645014   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:01.651158   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:01.660691   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:01.661936   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.148708928s)
	W1026 07:49:01.661974   14008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:49:01.661998   14008 retry.go:31] will retry after 3.173611608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 07:49:01.831183   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:02.145878   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:02.150488   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:02.160825   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:02.328199   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:02.646123   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:02.646218   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:02.664035   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:02.919747   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:03.147785   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:03.147981   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:03.161118   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:03.329564   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:03.645706   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:03.646934   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:03.660662   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:03.829924   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:04.143579   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:04.145580   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:04.162514   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:04.329240   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:04.644854   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:04.648241   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:04.662443   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:04.835977   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:49:04.998310   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:05.447497   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:05.448293   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:05.448556   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:05.451746   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:05.650277   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:05.650455   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:05.661239   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:05.829974   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:06.011372   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.175351196s)
	W1026 07:49:06.011422   14008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:49:06.011446   14008 retry.go:31] will retry after 4.999894282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1026 07:49:06.143775   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:06.144403   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:06.161747   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:06.334673   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:06.647141   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:06.647171   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:06.662386   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:06.828816   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:07.144561   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:07.145016   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:07.160621   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:07.329587   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:07.645139   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:07.645254   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:07.663509   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:07.829776   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:08.143546   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:08.144953   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:08.160505   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:08.328981   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:08.643035   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:08.644177   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:08.661070   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:08.828757   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:09.144753   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:09.145059   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:09.160962   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:09.328425   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:09.644372   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:09.644399   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:09.660403   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:09.831817   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:10.144808   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:10.144826   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:10.160702   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:10.329609   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:10.645351   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:10.645454   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:10.661380   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:10.830360   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:11.011509   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:49:11.145742   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:11.146504   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:11.163229   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:11.332204   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:11.647383   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:11.648519   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:11.661155   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:11.833542   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:12.151542   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:12.151719   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:12.160711   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:12.212675   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.201122616s)
	W1026 07:49:12.212722   14008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:49:12.212745   14008 retry.go:31] will retry after 6.289293379s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
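The interval logged by retry.go:31 above (6.29s here, with 11.56s, 15.73s, and 22.39s following on the later attempts in this log) grows roughly geometrically between attempts. A minimal sketch of that retry-with-backoff pattern, assuming a hypothetical retryWithBackoff helper and a doubling-plus-jitter schedule (not minikube's actual retry implementation):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff is a hypothetical stand-in for the pattern logged by
// retry.go:31: run fn until it succeeds or attempts are exhausted, and
// sleep an exponentially growing, jittered interval between failures.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Double the base each round and add up to 50% random jitter;
		// with base=5s this yields spacing similar to the log above.
		sleep := base << uint(i)
		sleep += time.Duration(rand.Int63n(int64(sleep / 2)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	calls := 0
	// A short base keeps the demo fast; the shape of the schedule is
	// what matters, not the absolute durations.
	_ = retryWithBackoff(4, 50*time.Millisecond, func() error {
		calls++
		return fmt.Errorf("apply failed (attempt %d)", calls)
	})
}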
	I1026 07:49:12.330186   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:12.644303   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:12.645583   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:12.661076   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:12.831130   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:13.146047   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:13.146196   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:13.161793   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:13.329503   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:13.646035   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:13.646115   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:13.661404   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:13.829524   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:14.143009   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:14.144229   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:14.162165   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:14.330157   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:14.643812   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:14.644574   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:14.660177   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:14.831972   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:15.145277   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:15.145743   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:15.160447   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:15.329126   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:15.643425   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:15.644600   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:15.660254   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:15.831146   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:16.148430   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:16.150180   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:16.162319   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:16.329931   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:16.648411   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:16.651220   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:16.661075   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:16.829774   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:17.144600   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:17.146152   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:17.161579   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:17.329517   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:17.644442   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:17.644871   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:17.661563   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:17.830434   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:18.144819   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:18.146725   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:18.161123   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:18.328545   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:18.502772   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:49:18.644421   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:18.647771   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:18.662145   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:18.831384   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:19.148398   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:19.156460   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:19.163876   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:19.367705   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:19.640472   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.137657514s)
	W1026 07:49:19.640510   14008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:49:19.640529   14008 retry.go:31] will retry after 11.559691455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
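The stderr line pins down why every retry fails identically: kubectl's client-side validation rejects ig-crd.yaml because the document carries no apiVersion and no kind, so backoff cannot help. A rough approximation of that specific check, assuming gopkg.in/yaml.v3 as the parser (kubectl's real validation is schema-driven; this sketch only tests the two fields named in the error):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// typeMeta mirrors the two fields the error reports as unset.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

// validateTypeMeta fails a single-document manifest the same way the
// log does when apiVersion or kind is missing. A real validator would
// report each missing field separately and handle multi-doc YAML.
func validateTypeMeta(manifest []byte) error {
	var tm typeMeta
	if err := yaml.Unmarshal(manifest, &tm); err != nil {
		return err
	}
	if tm.APIVersion == "" || tm.Kind == "" {
		return fmt.Errorf("error validating data: [apiVersion not set, kind not set]")
	}
	return nil
}

func main() {
	// Missing type metadata fails, matching the stderr above.
	fmt.Println(validateTypeMeta([]byte("metadata:\n  name: gadget\n")))
	// With apiVersion and kind present, the check passes (prints <nil>).
	fmt.Println(validateTypeMeta([]byte(
		"apiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n  name: example\n")))
}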
	I1026 07:49:19.648149   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:19.648845   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:19.663378   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:19.829139   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:20.144121   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:20.146949   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:20.162446   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:20.332706   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:20.647644   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:20.650779   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:20.663782   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:20.830482   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:21.146213   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:21.148491   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:21.162995   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:21.328296   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:21.852446   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:21.852608   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:21.853177   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:21.853605   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:22.147099   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:22.147189   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:22.160514   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:22.330052   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:22.644184   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:22.645163   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:22.660985   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:22.828288   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:23.146734   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:23.148977   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:23.160581   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:23.329735   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:23.644188   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:23.644999   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:23.660585   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:23.827972   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:24.144346   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:24.145309   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:24.160910   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:24.328940   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:24.648450   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:24.648604   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:24.659917   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:24.830416   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:25.150824   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:25.150965   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:25.161994   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:25.328456   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:25.646075   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:25.647080   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:25.660986   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:25.829289   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:26.146184   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:26.146545   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:26.160388   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:26.330766   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:26.643423   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:26.643671   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:26.660430   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:26.829605   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:27.143883   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:27.144735   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:27.160414   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:27.329141   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:27.643072   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:27.644123   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:27.660840   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:27.828241   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:28.144000   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:28.144553   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:28.160548   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:28.329234   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:28.644475   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:28.645012   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:28.662589   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:28.829575   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:29.150333   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:29.151352   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:29.163038   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:29.330121   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:29.648377   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:29.652120   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:29.662615   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:29.830778   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:30.146646   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:30.146882   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:30.162277   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:30.330258   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:30.647044   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:30.647985   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:30.662403   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:30.829262   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:31.144944   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:31.146860   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:31.162161   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:31.201275   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:49:31.330240   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:31.644465   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:31.645788   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:31.664006   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:31.829147   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:32.430799   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:32.436469   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:32.439692   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:32.439837   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:32.591637   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.390318501s)
	W1026 07:49:32.591679   14008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:49:32.591697   14008 retry.go:31] will retry after 15.730326228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:49:32.645768   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:32.649310   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:32.663328   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:32.828832   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:33.145154   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:33.146723   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:33.161208   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:33.331298   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:33.646080   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:33.646197   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:33.662864   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:33.830174   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:34.198661   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:34.198687   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:34.198734   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:34.338736   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:34.646606   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:34.649930   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:34.661005   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:34.829080   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:35.145306   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:35.145983   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:35.161783   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:35.330143   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:35.646783   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:35.647837   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:35.661546   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:35.829924   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:36.145326   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:36.146805   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:36.161310   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:36.330790   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:36.644683   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:36.645392   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:36.659756   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:36.829636   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:37.144783   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:37.145660   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:37.160330   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:37.329012   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:37.644708   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:37.644803   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:37.661621   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:37.829215   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:38.143492   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:38.144597   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:38.160828   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:38.328328   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:38.645430   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:38.646166   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:38.664921   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:38.829134   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:39.145895   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:39.145927   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:39.164541   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:39.329901   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:39.643346   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:39.646225   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:39.661697   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:39.831649   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:40.143768   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:40.144119   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:40.162728   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:40.330544   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:40.644527   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:40.644653   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:40.660429   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:40.828614   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:41.143730   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:41.144449   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:41.160221   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:41.328926   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:41.644718   14008 kapi.go:107] duration metric: took 54.00416485s to wait for kubernetes.io/minikube-addons=registry ...
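kapi.go:107 closes out the registry wait here at 54s; the surrounding kapi.go:96 lines are individual iterations of that poll loop. A sketch of such a label-selector wait using client-go (the helper name, kube-system namespace, timeout, and 500ms cadence are assumptions chosen to match the log's rhythm; only the client-go calls themselves are real API):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPod polls pods matching selector in ns until one reaches the
// Running phase, logging each pending iteration like kapi.go:96 does.
func waitForPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	ticker := time.NewTicker(500 * time.Millisecond) // ~0.5s cadence, as in the log
	defer ticker.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	// Kubeconfig path taken from the kubectl commands in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	start := time.Now()
	if err := waitForPod(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
	fmt.Printf("duration metric: took %s\n", time.Since(start))
}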
	I1026 07:49:41.645365   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:41.661836   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:41.827979   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:42.143173   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:42.160606   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:42.329114   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:42.646628   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:42.663479   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:42.831134   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:43.143880   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:43.163447   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:43.329704   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:43.644944   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:43.661267   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:43.829541   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:44.144537   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:44.159936   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:44.329151   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:44.652235   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:44.665949   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:44.829759   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:45.146572   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:45.163396   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:45.329445   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:45.645902   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:45.662178   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:45.834232   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:46.143394   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:46.163221   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:46.330097   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:46.643720   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:46.661098   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:46.830210   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:47.145286   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:47.161021   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:47.328813   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:47.643168   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:47.660688   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:47.830339   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:48.146306   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:48.163165   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:48.322494   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:49:48.331143   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:48.643948   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:48.661459   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:48.830880   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:49.146259   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:49.163177   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:49.335545   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:49.499354   14008 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.176818854s)
	W1026 07:49:49.499388   14008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:49:49.499404   14008 retry.go:31] will retry after 22.389658964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:49:49.645185   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:49.661181   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:49.840013   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:50.143557   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:50.160463   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:50.329825   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:50.644490   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:50.661196   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:50.833224   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:51.145778   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:51.161499   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:51.329873   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:51.647007   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:51.661881   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:51.827465   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:52.148421   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:52.161891   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:52.331256   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:52.643996   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:52.661180   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:52.833187   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:53.182197   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:53.187365   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:53.331640   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:53.644346   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:53.661053   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:53.836707   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:54.144621   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:54.162105   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:54.328904   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:54.646420   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:54.663518   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:54.840746   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:55.143647   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:55.160129   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:55.329231   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:55.643879   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:55.662198   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:55.829756   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:56.145169   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:56.162513   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:56.329192   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:56.644295   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:56.745568   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:56.845745   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:57.145632   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:57.160530   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:57.335387   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:57.649281   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:57.663040   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:57.829188   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:58.144868   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:58.160956   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:58.328426   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:58.647711   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:58.665149   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:58.831215   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:59.147310   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:59.161447   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:59.334176   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:59.647435   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:59.826290   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:59.832519   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:50:00.143948   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:00.160976   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:00.332203   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:50:00.650381   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:00.662221   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:00.830241   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:50:01.144299   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:01.162104   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:01.329346   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:50:01.648222   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:01.661462   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:01.832151   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:50:02.147389   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:02.165270   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:02.329324   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:50:02.645781   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:02.660385   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:02.830261   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:50:03.144759   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:03.160535   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:03.329134   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:50:03.645137   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:03.663712   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:03.828265   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:50:04.145927   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:04.162042   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:04.328004   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:50:04.644048   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:04.660513   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:04.828937   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:50:05.144362   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:05.161658   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:05.328730   14008 kapi.go:107] duration metric: took 1m16.00395683s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1026 07:50:05.644513   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:05.661322   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:06.143123   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:06.161707   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:06.643407   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:06.661947   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:07.144117   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:07.160591   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:07.644020   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:07.661544   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:08.143992   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:08.161341   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:08.644325   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:08.661519   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:09.143627   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:09.160472   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:09.642943   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:09.660958   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:10.144053   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:10.161426   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:10.644658   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:10.660676   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:11.144060   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:11.161293   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:11.644102   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:11.661096   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:11.889369   14008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:50:12.145071   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:12.160928   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:50:12.635754   14008 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 07:50:12.635896   14008 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
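	The apply failure just above is kubectl's client-side validation: every document in a manifest must carry top-level apiVersion and kind fields, and the copy of ig-crd.yaml under /etc/kubernetes/addons evidently has neither, so the CRD is rejected while the other gadget resources apply cleanly. A minimal Go sketch of the same check, assuming gopkg.in/yaml.v3 and a hypothetical local file path; this is an illustration, not minikube's or kubectl's actual code:

	    package main

	    import (
	        "fmt"
	        "os"

	        "gopkg.in/yaml.v3"
	    )

	    // typeMeta mirrors the two fields kubectl's validator insists on.
	    type typeMeta struct {
	        APIVersion string `yaml:"apiVersion"`
	        Kind       string `yaml:"kind"`
	    }

	    func main() {
	        // Hypothetical path, standing in for /etc/kubernetes/addons/ig-crd.yaml.
	        data, err := os.ReadFile("ig-crd.yaml")
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        var tm typeMeta
	        // Note: yaml.Unmarshal reads only the first document of a multi-doc
	        // file, which is enough for this single-CRD manifest.
	        if err := yaml.Unmarshal(data, &tm); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        if tm.APIVersion == "" || tm.Kind == "" {
	            // This is the condition behind "[apiVersion not set, kind not set]".
	            fmt.Println("error validating data: [apiVersion not set, kind not set]")
	        }
	    }

	Passing --validate=false, as the error message suggests, would only skip this client-side check; it would not make the malformed CRD document applyable.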
	I1026 07:50:12.643990   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:12.661306   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:13.144510   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:13.160569   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:13.644073   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:13.660896   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:14.143896   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:14.160829   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:14.645295   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:14.662244   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:15.144033   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:15.160700   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:15.643781   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:15.660663   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:16.143321   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:16.161048   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:16.644104   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:16.660657   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:17.144472   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:17.160733   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:17.643734   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:17.660446   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:18.143224   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:18.161117   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:18.644391   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:18.661950   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:19.143966   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:19.160986   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:19.643610   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:19.661510   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:20.143528   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:20.160365   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:20.645153   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:20.661038   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:21.144577   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:21.160518   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:21.643251   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:21.660995   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:22.143775   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:22.160650   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:22.643903   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:22.660440   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:23.144940   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:23.160642   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:23.643684   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:23.660621   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:24.143255   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:24.160774   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:24.643161   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:24.661123   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:25.144223   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:25.160738   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:25.644252   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:25.660751   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:26.143583   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:26.160494   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:26.643146   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:26.660931   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:27.144480   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:27.160472   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:27.642883   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:27.660769   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:28.145429   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:28.161504   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:28.643348   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:28.660952   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:29.144574   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:29.160224   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:29.644121   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:29.660845   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:30.143929   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:30.160523   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:30.643867   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:30.660600   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:31.143804   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:31.160786   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:31.643797   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:31.660788   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:32.143917   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:32.160968   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:32.642956   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:32.660582   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:33.295304   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:33.296145   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:33.644135   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:33.660626   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:34.143531   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:34.160330   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:34.644272   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:34.660814   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:35.144228   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:35.161500   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:35.643810   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:35.661186   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:36.146371   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:36.161654   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:36.644162   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:36.661060   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:37.143836   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:37.160997   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:37.643666   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:37.660395   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:38.144401   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:38.160777   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:38.643503   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:38.661129   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:39.144414   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:39.161533   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:39.643865   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:39.660625   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:40.143715   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:40.160177   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:40.644737   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:40.660496   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:41.143842   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:41.160487   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:41.644837   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:41.660869   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:42.144766   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:42.161519   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:42.643246   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:42.661232   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:43.144139   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:43.160807   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:43.644046   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:43.660551   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:44.144137   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:44.160723   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:44.643407   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:44.661336   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:45.143830   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:45.160717   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:45.643816   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:45.661147   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:46.143828   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:46.160778   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:46.643746   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:46.660575   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:47.144149   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:47.160897   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:47.644332   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:47.661288   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:48.144710   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:48.160768   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:48.644758   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:48.660478   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:49.143505   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:49.160026   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:49.644172   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:49.661190   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:50.144182   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:50.160868   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:50.643759   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:50.660768   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:51.144082   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:51.161114   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:51.644260   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:51.661240   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:52.144831   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:52.160255   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:52.644379   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:52.661419   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:53.144508   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:53.159766   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:53.644907   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:53.661077   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:54.144274   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:54.161833   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:54.643835   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:54.660825   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:55.143925   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:55.160529   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:55.645114   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:55.661216   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:56.144531   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:56.160222   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:56.644184   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:56.661145   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:57.144461   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:57.161304   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:57.644191   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:57.661364   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:58.144047   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:58.160815   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:58.644005   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:58.660381   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:59.143450   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:59.161392   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:50:59.645039   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:50:59.661113   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:00.144227   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:00.160689   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:00.644004   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:00.660528   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:01.143415   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:01.160942   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:01.644175   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:01.660875   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:02.144485   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:02.160022   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:02.644770   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:02.660184   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:03.143803   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:03.160700   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:03.643742   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:03.660844   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:04.144466   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:04.160256   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:04.643959   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:04.661156   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:05.144143   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:05.160733   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:05.644963   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:05.661322   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:06.144803   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:06.160487   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:06.645501   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:06.660721   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:07.145998   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:07.163248   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:07.646748   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:07.660890   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:08.143907   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:08.161670   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:08.645474   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:08.660347   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:09.145131   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:09.163224   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:09.646274   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:09.662038   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:10.144268   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:10.161350   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:10.645457   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:10.662192   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:11.143824   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:11.161031   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:11.644007   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:11.662574   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:12.144384   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:12.161535   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:12.643416   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:12.661228   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:13.145129   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:13.162895   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:13.644165   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:13.660678   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:14.143947   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:14.163241   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:14.646056   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:14.664075   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:15.146077   14008 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:51:15.161390   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:15.647179   14008 kapi.go:107] duration metric: took 2m28.007333804s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 07:51:15.747484   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:16.161103   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:16.659683   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:17.161349   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:17.664859   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:18.164320   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:18.666271   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:19.161838   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:19.663427   14008 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:51:20.161752   14008 kapi.go:107] duration metric: took 2m28.504395093s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 07:51:20.163252   14008 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-465751 cluster.
	I1026 07:51:20.164374   14008 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 07:51:20.165445   14008 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1026 07:51:20.166663   14008 out.go:179] * Enabled addons: storage-provisioner, registry-creds, ingress-dns, cloud-spanner, amd-gpu-device-plugin, default-storageclass, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1026 07:51:20.167825   14008 addons.go:514] duration metric: took 2m40.31575822s for enable addons: enabled=[storage-provisioner registry-creds ingress-dns cloud-spanner amd-gpu-device-plugin default-storageclass nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1026 07:51:20.167867   14008 start.go:246] waiting for cluster config update ...
	I1026 07:51:20.167891   14008 start.go:255] writing updated cluster config ...
	I1026 07:51:20.168151   14008 ssh_runner.go:195] Run: rm -f paused
	I1026 07:51:20.175186   14008 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 07:51:20.179405   14008 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kbfd9" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:51:20.184782   14008 pod_ready.go:94] pod "coredns-66bc5c9577-kbfd9" is "Ready"
	I1026 07:51:20.184809   14008 pod_ready.go:86] duration metric: took 5.383983ms for pod "coredns-66bc5c9577-kbfd9" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:51:20.186871   14008 pod_ready.go:83] waiting for pod "etcd-addons-465751" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:51:20.190903   14008 pod_ready.go:94] pod "etcd-addons-465751" is "Ready"
	I1026 07:51:20.190929   14008 pod_ready.go:86] duration metric: took 4.033208ms for pod "etcd-addons-465751" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:51:20.193037   14008 pod_ready.go:83] waiting for pod "kube-apiserver-addons-465751" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:51:20.197873   14008 pod_ready.go:94] pod "kube-apiserver-addons-465751" is "Ready"
	I1026 07:51:20.197893   14008 pod_ready.go:86] duration metric: took 4.834435ms for pod "kube-apiserver-addons-465751" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:51:20.200075   14008 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-465751" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:51:20.579544   14008 pod_ready.go:94] pod "kube-controller-manager-addons-465751" is "Ready"
	I1026 07:51:20.579568   14008 pod_ready.go:86] duration metric: took 379.465647ms for pod "kube-controller-manager-addons-465751" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:51:20.780515   14008 pod_ready.go:83] waiting for pod "kube-proxy-jfndh" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:51:21.178573   14008 pod_ready.go:94] pod "kube-proxy-jfndh" is "Ready"
	I1026 07:51:21.178601   14008 pod_ready.go:86] duration metric: took 398.060054ms for pod "kube-proxy-jfndh" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:51:21.379038   14008 pod_ready.go:83] waiting for pod "kube-scheduler-addons-465751" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:51:21.779229   14008 pod_ready.go:94] pod "kube-scheduler-addons-465751" is "Ready"
	I1026 07:51:21.779261   14008 pod_ready.go:86] duration metric: took 400.195709ms for pod "kube-scheduler-addons-465751" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:51:21.779275   14008 pod_ready.go:40] duration metric: took 1.604062657s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 07:51:21.823887   14008 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 07:51:21.825521   14008 out.go:179] * Done! kubectl is now configured to use "addons-465751" cluster and "default" namespace by default
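	For readers tracing the repeated kapi.go:96 "Pending" lines and the kapi.go:107 duration metrics above: they come from a poll loop that lists pods by label selector until every match is Running. A rough Go sketch of that pattern, assuming k8s.io/client-go and k8s.io/apimachinery; the package, function name, and log wording are illustrative, not minikube's actual kapi implementation:

	    package podwait

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // WaitForPods polls until every pod matching selector in ns is Running,
	    // printing a "Pending" line on each miss, like the kapi.go:96 entries.
	    func WaitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	        start := time.Now()
	        err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
	            func(ctx context.Context) (bool, error) {
	                pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	                if err != nil || len(pods.Items) == 0 {
	                    // No match yet (or a transient list error): keep polling.
	                    fmt.Printf("waiting for pod %q, current state: Pending: [%v]\n", selector, err)
	                    return false, nil
	                }
	                for _, p := range pods.Items {
	                    if p.Status.Phase != corev1.PodRunning {
	                        return false, nil
	                    }
	                }
	                return true, nil
	            })
	        if err == nil {
	            fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
	        }
	        return err
	    }

	The three duration lines above (1m16s for csi-hostpath-driver, 2m28s for ingress-nginx and for gcp-auth) are the success branch of exactly this kind of loop.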
	
	
	==> CRI-O <==
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.748705465Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761465274748680188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01c63e25-0bca-442e-83b6-9cc24bc3e002 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.749382825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3614039-af65-42a7-8262-8484338a72e7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.749620569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3614039-af65-42a7-8262-8484338a72e7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.749960343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ce1df9cd71f36491195368428833aa12d2cf56dfc1fbdc5d6111f183bc1164b,PodSandboxId:b622e64d687649852550954737254d2465ec25d796afc5700b8bd9f77cf3fb22,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761465132037578013,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f70c607-ec3d-4882-bc7f-844468c63e6f,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb09f50eaaae62057b0d3dbe8e843156c75048f9de36ffcb29206f0991c9b998,PodSandboxId:e053a958bfef22f4089ba6a42f44f18af2c8fa9ccf052583b0b2524fc1fb5032,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761465086131963666,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee16220a-f0b1-46cf-a6ce-6883375c22fb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1c35d33542f73cfb2af23f493adf57f28bf77e83121bfd614eabdadd92c9bb,PodSandboxId:5555c6439548b42fb733b10e1c453fee489b35e2305f53714474b4e51881cb2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761465075199228668,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-qtzss,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cf7a8ce1-6817-4337-8c09-3c58f7c2a38f,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1e73ac1615fb7a409de4214673542b1b97126f20d3408c6a6ee9a2c5b694d6bb,PodSandboxId:70f8bda30e726c5562da7168118789ead0951fa99518defd3c0d48acf7d6331c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761464991740981615,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b2ss5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 238e7067-5ff1-4815-b618-7ac7514f8521,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa8a9995c28f44c291a8748431364b7af145e03242dfa564f09cd3c7e8674b5,PodSandboxId:e9f7031f3614b29ae3dcdd9c43081786dfc2c8edb31d9b593a2ff1f0c0589aa2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761464991614290115,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bxcsh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c25b9e6d-3037-4bbd-b010-d9a25edc9fac,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858c20ff66d8fb8afec0076896d0213fbc9a2b156859ec511a684297488c313d,PodSandboxId:3f3538a2f6baa43f8fec7ca32d6dae1a1d0d4762601268be0bd1e13ee84c1fbd,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761464988093746985,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-54x6r,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 38ebc718-5c82-48ab-9c88-866b4144c69c,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dfca2fe0fac4eb4f8f562fe768d4ac8881602d2e67c75a9d828a7e88911e13e,PodSandboxId:98bf0d9fb65557c3984523979dba6385e286de42a25ea8e71540f1f0938f7881,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761464976251273297,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9d177cf-2ef7-4e93-bb7f-9a690e8482c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102703e69f0039a7d2188bd80b1e07914e933b4d9b928ea1d0e390f0d61c8804,PodSandboxId:4ae37ad7fc621ac2a681179018eb93c753d6dfe077e3bbc1bbf2d272a04cce8d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761464930687700828,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bn844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d08c1ce9-d708-42b0-9733-b1fc34e50760,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04be386c27499f20abd49cfbb426cac0e7dc0c61e8c2325a71ae59db755626ba,PodSandboxId:9f016c5613f37a5f22ae109bf1055bf712e0868bc612139efc4f902a8f55d01e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761464930223268501,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e909bb90-9eac-4d89-bb56-bf518cc23c65,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b74da9da46dee0b6f9f012095d6c7e38d00ef29b88ea687c2e8353b81b860ceb,PodSandboxId:869ddfaf5f2f2c48fd4f5a1871daf5dd41f93f81824beacb489ef4d0770e71eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761464921051892899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kbfd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c915e111-5241-43dd-9f41-4920ebfae2dc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74df49c7e9a82f8b06e4c28154546c647740bd1c20751b5b8869acc3d7e4c434,PodSandboxId:920e66682b86bbdd7ee8bce58d6f8910e7acbe01be2c81307b11e4d023a34c30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761464920198898336,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfndh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5565ff2a-a1fd-4447-b5ce-1bb3343e6cc5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:facb7370484d275d776da44dab41815765124c7c54b7aeb6073790b5474d181d,PodSandboxId:0c29aa963bf51b1f63182b299bd3a390b9782967ddfbbfdc9d8506f2533d9f62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761464908614822085,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c456c27dc216b95a45d561a92b01e11,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPo
rt\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97033e19f35ac749b36cf25423c2a4736d39f2ebad370df898b7fedbc585af57,PodSandboxId:57e1b1aafd0a272b4fff193fa52659f1a1385216930766bd76dbe458510e1f69,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761464908600517781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d48382ad57675183aaf2a2f7719064d,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69406b443199d152fd622120ff2bc2bda04ddc649c8543ec2230e3dababbf816,PodSandboxId:9bee8334b5279fed0947207fbe7054f1a681a195e131e8b92238b18742efcd65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761464908574122513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 6992518ddef4ef1cfb4ae2d5cf3c5bff,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfe6790934e696db48f78e8ab9c20df62e9f88919d8a5a803105137d47b9ff6,PodSandboxId:b68e21a90dfdcb86818eb09216d500069f3d9ac7d2c3740b3d65b884fab8e73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761464908552016208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserv
er,io.kubernetes.pod.name: kube-apiserver-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693961bdfae2a5e25c7fc742f7d1470b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3614039-af65-42a7-8262-8484338a72e7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.794939505Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4095e439-6ee0-41af-9046-4d9e165d37b6 name=/runtime.v1.RuntimeService/Version
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.795116591Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4095e439-6ee0-41af-9046-4d9e165d37b6 name=/runtime.v1.RuntimeService/Version
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.796844334Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34fe36d4-e607-405f-a1af-1ac4832d1395 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.799329695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761465274799215106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34fe36d4-e607-405f-a1af-1ac4832d1395 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.800535260Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa2a6c6b-044f-4ca1-87a8-b972c93ed44b name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.800616642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa2a6c6b-044f-4ca1-87a8-b972c93ed44b name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.801431088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ce1df9cd71f36491195368428833aa12d2cf56dfc1fbdc5d6111f183bc1164b,PodSandboxId:b622e64d687649852550954737254d2465ec25d796afc5700b8bd9f77cf3fb22,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761465132037578013,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f70c607-ec3d-4882-bc7f-844468c63e6f,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb09f50eaaae62057b0d3dbe8e843156c75048f9de36ffcb29206f0991c9b998,PodSandboxId:e053a958bfef22f4089ba6a42f44f18af2c8fa9ccf052583b0b2524fc1fb5032,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761465086131963666,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee16220a-f0b1-46cf-a6ce-6883375c22fb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1c35d33542f73cfb2af23f493adf57f28bf77e83121bfd614eabdadd92c9bb,PodSandboxId:5555c6439548b42fb733b10e1c453fee489b35e2305f53714474b4e51881cb2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761465075199228668,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-qtzss,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cf7a8ce1-6817-4337-8c09-3c58f7c2a38f,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1e73ac1615fb7a409de4214673542b1b97126f20d3408c6a6ee9a2c5b694d6bb,PodSandboxId:70f8bda30e726c5562da7168118789ead0951fa99518defd3c0d48acf7d6331c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1761464991740981615,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b2ss5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 238e7067-5ff1-4815-b618-7ac7514f8521,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa8a9995c28f44c291a8748431364b7af145e03242dfa564f09cd3c7e8674b5,PodSandboxId:e9f7031f3614b29ae3dcdd9c43081786dfc2c8edb31d9b593a2ff1f0c0589aa2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761464991614290115,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bxcsh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c25b9e6d-3037-4bbd-b010-d9a25edc9fac,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858c20ff66d8fb8afec0076896d0213fbc9a2b156859ec511a684297488c313d,PodSandboxId:3f3538a2f6baa43f8fec7ca32d6dae1a1d0d4762601268be0bd1e13ee84c1fbd,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761464988093746985,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-54x6r,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 38ebc718-5c82-48ab-9c88-866b4144c69c,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dfca2fe0fac4eb4f8f562fe768d4ac8881602d2e67c75a9d828a7e88911e13e,PodSandboxId:98bf0d9fb65557c3984523979dba6385e286de42a25ea8e71540f1f0938f7881,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761464976251273297,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9d177cf-2ef7-4e93-bb7f-9a690e8482c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102703e69f0039a7d2188bd80b1e07914e933b4d9b928ea1d0e390f0d61c8804,PodSandboxId:4ae37ad7fc621ac2a681179018eb93c753d6dfe077e3bbc1bbf2d272a04cce8d,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761464930687700828,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bn844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d08c1ce9-d708-42b0-9733-b1fc34e50760,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04be386c27499f20abd49cfbb426cac0e7dc0c61e8c2325a71ae59db755626ba,PodSandboxId:9f016c5613f37a5f22ae109bf1055bf712e0868bc612139efc4f902a8f55d01e,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761464930223268501,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e909bb90-9eac-4d89-bb56-bf518cc23c65,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b74da9da46dee0b6f9f012095d6c7e38d00ef29b88ea687c2e8353b81b860ceb,PodSandboxId:869ddfaf5f2f2c48fd4f5a1871daf5dd41f93f81824beacb489ef4d0770e71eb,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761464921051892899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kbfd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c915e111-5241-43dd-9f41-4920ebfae2dc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74df49c7e9a82f8b06e4c28154546c647740bd1c20751b5b8869acc3d7e4c434,PodSandboxId:920e66682b86bbdd7ee8bce58d6f8910e7acbe01be2c81307b11e4d023a34c30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761464920198898336,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfndh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5565ff2a-a1fd-4447-b5ce-1bb3343e6cc5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:facb7370484d275d776da44dab41815765124c7c54b7aeb6073790b5474d181d,PodSandboxId:0c29aa963bf51b1f63182b299bd3a390b9782967ddfbbfdc9d8506f2533d9f62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761464908614822085,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c456c27dc216b95a45d561a92b01e11,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPo
rt\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97033e19f35ac749b36cf25423c2a4736d39f2ebad370df898b7fedbc585af57,PodSandboxId:57e1b1aafd0a272b4fff193fa52659f1a1385216930766bd76dbe458510e1f69,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761464908600517781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d48382ad57675183aaf2a2f7719064d,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69406b443199d152fd622120ff2bc2bda04ddc649c8543ec2230e3dababbf816,PodSandboxId:9bee8334b5279fed0947207fbe7054f1a681a195e131e8b92238b18742efcd65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761464908574122513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 6992518ddef4ef1cfb4ae2d5cf3c5bff,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfe6790934e696db48f78e8ab9c20df62e9f88919d8a5a803105137d47b9ff6,PodSandboxId:b68e21a90dfdcb86818eb09216d500069f3d9ac7d2c3740b3d65b884fab8e73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761464908552016208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserv
er,io.kubernetes.pod.name: kube-apiserver-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693961bdfae2a5e25c7fc742f7d1470b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa2a6c6b-044f-4ca1-87a8-b972c93ed44b name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.840094178Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb2dafa1-efdd-4a4a-bc4a-072dcccd47f9 name=/runtime.v1.RuntimeService/Version
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.840355805Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb2dafa1-efdd-4a4a-bc4a-072dcccd47f9 name=/runtime.v1.RuntimeService/Version
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.841411816Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87d946d6-d8d0-46b4-9242-1ce58752bfb0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.842647104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761465274842618575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87d946d6-d8d0-46b4-9242-1ce58752bfb0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.843160452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19b081c1-15cc-4ce0-8f9d-80bf7a1e11d7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.843281674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19b081c1-15cc-4ce0-8f9d-80bf7a1e11d7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.844424007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ce1df9cd71f36491195368428833aa12d2cf56dfc1fbdc5d6111f183bc1164b,PodSandboxId:b622e64d687649852550954737254d2465ec25d796afc5700b8bd9f77cf3fb22,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761465132037578013,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f70c607-ec3d-4882-bc7f-844468c63e6f,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb09f50eaaae62057b0d3dbe8e843156c75048f9de36ffcb29206f0991c9b998,PodSandboxId:e053a958bfef22f4089ba6a42f44f18af2c8fa9ccf052583b0b2524fc1fb5032,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761465086131963666,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee16220a-f0b1-46cf-a6ce-6883375c22fb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1c35d33542f73cfb2af23f493adf57f28bf77e83121bfd614eabdadd92c9bb,PodSandboxId:5555c6439548b42fb733b10e1c453fee489b35e2305f53714474b4e51881cb2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761465075199228668,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-qtzss,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cf7a8ce1-6817-4337-8c09-3c58f7c2a38f,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1e73ac1615fb7a409de4214673542b1b97126f20d3408c6a6ee9a2c5b694d6bb,PodSandboxId:70f8bda30e726c5562da7168118789ead0951fa99518defd3c0d48acf7d6331c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1761464991740981615,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b2ss5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 238e7067-5ff1-4815-b618-7ac7514f8521,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa8a9995c28f44c291a8748431364b7af145e03242dfa564f09cd3c7e8674b5,PodSandboxId:e9f7031f3614b29ae3dcdd9c43081786dfc2c8edb31d9b593a2ff1f0c0589aa2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761464991614290115,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bxcsh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c25b9e6d-3037-4bbd-b010-d9a25edc9fac,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858c20ff66d8fb8afec0076896d0213fbc9a2b156859ec511a684297488c313d,PodSandboxId:3f3538a2f6baa43f8fec7ca32d6dae1a1d0d4762601268be0bd1e13ee84c1fbd,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761464988093746985,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-54x6r,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 38ebc718-5c82-48ab-9c88-866b4144c69c,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dfca2fe0fac4eb4f8f562fe768d4ac8881602d2e67c75a9d828a7e88911e13e,PodSandboxId:98bf0d9fb65557c3984523979dba6385e286de42a25ea8e71540f1f0938f7881,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761464976251273297,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9d177cf-2ef7-4e93-bb7f-9a690e8482c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102703e69f0039a7d2188bd80b1e07914e933b4d9b928ea1d0e390f0d61c8804,PodSandboxId:4ae37ad7fc621ac2a681179018eb93c753d6dfe077e3bbc1bbf2d272a04cce8d,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761464930687700828,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bn844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d08c1ce9-d708-42b0-9733-b1fc34e50760,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04be386c27499f20abd49cfbb426cac0e7dc0c61e8c2325a71ae59db755626ba,PodSandboxId:9f016c5613f37a5f22ae109bf1055bf712e0868bc612139efc4f902a8f55d01e,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761464930223268501,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e909bb90-9eac-4d89-bb56-bf518cc23c65,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b74da9da46dee0b6f9f012095d6c7e38d00ef29b88ea687c2e8353b81b860ceb,PodSandboxId:869ddfaf5f2f2c48fd4f5a1871daf5dd41f93f81824beacb489ef4d0770e71eb,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761464921051892899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kbfd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c915e111-5241-43dd-9f41-4920ebfae2dc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74df49c7e9a82f8b06e4c28154546c647740bd1c20751b5b8869acc3d7e4c434,PodSandboxId:920e66682b86bbdd7ee8bce58d6f8910e7acbe01be2c81307b11e4d023a34c30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761464920198898336,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfndh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5565ff2a-a1fd-4447-b5ce-1bb3343e6cc5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:facb7370484d275d776da44dab41815765124c7c54b7aeb6073790b5474d181d,PodSandboxId:0c29aa963bf51b1f63182b299bd3a390b9782967ddfbbfdc9d8506f2533d9f62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761464908614822085,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c456c27dc216b95a45d561a92b01e11,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPo
rt\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97033e19f35ac749b36cf25423c2a4736d39f2ebad370df898b7fedbc585af57,PodSandboxId:57e1b1aafd0a272b4fff193fa52659f1a1385216930766bd76dbe458510e1f69,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761464908600517781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d48382ad57675183aaf2a2f7719064d,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69406b443199d152fd622120ff2bc2bda04ddc649c8543ec2230e3dababbf816,PodSandboxId:9bee8334b5279fed0947207fbe7054f1a681a195e131e8b92238b18742efcd65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761464908574122513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 6992518ddef4ef1cfb4ae2d5cf3c5bff,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfe6790934e696db48f78e8ab9c20df62e9f88919d8a5a803105137d47b9ff6,PodSandboxId:b68e21a90dfdcb86818eb09216d500069f3d9ac7d2c3740b3d65b884fab8e73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761464908552016208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserv
er,io.kubernetes.pod.name: kube-apiserver-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693961bdfae2a5e25c7fc742f7d1470b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19b081c1-15cc-4ce0-8f9d-80bf7a1e11d7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.878889591Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55b386ab-4f35-411a-a3d5-936c84b00f5a name=/runtime.v1.RuntimeService/Version
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.878975294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55b386ab-4f35-411a-a3d5-936c84b00f5a name=/runtime.v1.RuntimeService/Version
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.880780683Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3df69fe3-6801-4d69-aa30-ae0bc12a7eca name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.883358877Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761465274883293559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3df69fe3-6801-4d69-aa30-ae0bc12a7eca name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.884362903Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99c1bcc8-9158-45e1-a63b-098025966c38 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.884599322Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99c1bcc8-9158-45e1-a63b-098025966c38 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 07:54:34 addons-465751 crio[818]: time="2025-10-26 07:54:34.885054382Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ce1df9cd71f36491195368428833aa12d2cf56dfc1fbdc5d6111f183bc1164b,PodSandboxId:b622e64d687649852550954737254d2465ec25d796afc5700b8bd9f77cf3fb22,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761465132037578013,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f70c607-ec3d-4882-bc7f-844468c63e6f,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb09f50eaaae62057b0d3dbe8e843156c75048f9de36ffcb29206f0991c9b998,PodSandboxId:e053a958bfef22f4089ba6a42f44f18af2c8fa9ccf052583b0b2524fc1fb5032,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761465086131963666,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee16220a-f0b1-46cf-a6ce-6883375c22fb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c1c35d33542f73cfb2af23f493adf57f28bf77e83121bfd614eabdadd92c9bb,PodSandboxId:5555c6439548b42fb733b10e1c453fee489b35e2305f53714474b4e51881cb2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761465075199228668,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-qtzss,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cf7a8ce1-6817-4337-8c09-3c58f7c2a38f,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1e73ac1615fb7a409de4214673542b1b97126f20d3408c6a6ee9a2c5b694d6bb,PodSandboxId:70f8bda30e726c5562da7168118789ead0951fa99518defd3c0d48acf7d6331c,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1761464991740981615,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b2ss5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 238e7067-5ff1-4815-b618-7ac7514f8521,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa8a9995c28f44c291a8748431364b7af145e03242dfa564f09cd3c7e8674b5,PodSandboxId:e9f7031f3614b29ae3dcdd9c43081786dfc2c8edb31d9b593a2ff1f0c0589aa2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761464991614290115,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bxcsh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c25b9e6d-3037-4bbd-b010-d9a25edc9fac,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858c20ff66d8fb8afec0076896d0213fbc9a2b156859ec511a684297488c313d,PodSandboxId:3f3538a2f6baa43f8fec7ca32d6dae1a1d0d4762601268be0bd1e13ee84c1fbd,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761464988093746985,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-54x6r,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 38ebc718-5c82-48ab-9c88-866b4144c69c,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dfca2fe0fac4eb4f8f562fe768d4ac8881602d2e67c75a9d828a7e88911e13e,PodSandboxId:98bf0d9fb65557c3984523979dba6385e286de42a25ea8e71540f1f0938f7881,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761464976251273297,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9d177cf-2ef7-4e93-bb7f-9a690e8482c3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102703e69f0039a7d2188bd80b1e07914e933b4d9b928ea1d0e390f0d61c8804,PodSandboxId:4ae37ad7fc621ac2a681179018eb93c753d6dfe077e3bbc1bbf2d272a04cce8d,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761464930687700828,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bn844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d08c1ce9-d708-42b0-9733-b1fc34e50760,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04be386c27499f20abd49cfbb426cac0e7dc0c61e8c2325a71ae59db755626ba,PodSandboxId:9f016c5613f37a5f22ae109bf1055bf712e0868bc612139efc4f902a8f55d01e,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761464930223268501,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e909bb90-9eac-4d89-bb56-bf518cc23c65,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b74da9da46dee0b6f9f012095d6c7e38d00ef29b88ea687c2e8353b81b860ceb,PodSandboxId:869ddfaf5f2f2c48fd4f5a1871daf5dd41f93f81824beacb489ef4d0770e71eb,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761464921051892899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kbfd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c915e111-5241-43dd-9f41-4920ebfae2dc,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74df49c7e9a82f8b06e4c28154546c647740bd1c20751b5b8869acc3d7e4c434,PodSandboxId:920e66682b86bbdd7ee8bce58d6f8910e7acbe01be2c81307b11e4d023a34c30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761464920198898336,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jfndh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5565ff2a-a1fd-4447-b5ce-1bb3343e6cc5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:facb7370484d275d776da44dab41815765124c7c54b7aeb6073790b5474d181d,PodSandboxId:0c29aa963bf51b1f63182b299bd3a390b9782967ddfbbfdc9d8506f2533d9f62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761464908614822085,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c456c27dc216b95a45d561a92b01e11,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPo
rt\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97033e19f35ac749b36cf25423c2a4736d39f2ebad370df898b7fedbc585af57,PodSandboxId:57e1b1aafd0a272b4fff193fa52659f1a1385216930766bd76dbe458510e1f69,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761464908600517781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d48382ad57675183aaf2a2f7719064d,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69406b443199d152fd622120ff2bc2bda04ddc649c8543ec2230e3dababbf816,PodSandboxId:9bee8334b5279fed0947207fbe7054f1a681a195e131e8b92238b18742efcd65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761464908574122513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 6992518ddef4ef1cfb4ae2d5cf3c5bff,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfe6790934e696db48f78e8ab9c20df62e9f88919d8a5a803105137d47b9ff6,PodSandboxId:b68e21a90dfdcb86818eb09216d500069f3d9ac7d2c3740b3d65b884fab8e73c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761464908552016208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserv
er,io.kubernetes.pod.name: kube-apiserver-addons-465751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693961bdfae2a5e25c7fc742f7d1470b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99c1bcc8-9158-45e1-a63b-098025966c38 name=/runtime.v1.RuntimeService/ListContainers
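
The ListContainers dump above is a single CRI log line recording CRI-O's view of every container on the node. The same view can be pulled interactively over the minikube SSH tunnel; a sketch, assuming the addons-465751 profile is still running:

    out/minikube-linux-amd64 -p addons-465751 ssh -- sudo crictl ps -a
    out/minikube-linux-amd64 -p addons-465751 ssh -- sudo crictl inspect <container-id>

crictl ps -a prints the condensed table that also appears under "container status" below; crictl inspect returns the full per-container JSON.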
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5ce1df9cd71f3       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago       Running             nginx                     0                   b622e64d68764       nginx
	eb09f50eaaae6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   e053a958bfef2       busybox
	4c1c35d33542f       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago       Running             controller                0                   5555c6439548b       ingress-nginx-controller-675c5ddd98-qtzss
	1e73ac1615fb7       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                             4 minutes ago       Exited              patch                     1                   70f8bda30e726       ingress-nginx-admission-patch-b2ss5
	7fa8a9995c28f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              create                    0                   e9f7031f3614b       ingress-nginx-admission-create-bxcsh
	858c20ff66d8f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   3f3538a2f6baa       gadget-54x6r
	9dfca2fe0fac4       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   98bf0d9fb6555       kube-ingress-dns-minikube
	102703e69f003       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   4ae37ad7fc621       amd-gpu-device-plugin-bn844
	04be386c27499       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   9f016c5613f37       storage-provisioner
	b74da9da46dee       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   869ddfaf5f2f2       coredns-66bc5c9577-kbfd9
	74df49c7e9a82       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   920e66682b86b       kube-proxy-jfndh
	facb7370484d2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             6 minutes ago       Running             kube-scheduler            0                   0c29aa963bf51       kube-scheduler-addons-465751
	97033e19f35ac       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             6 minutes ago       Running             kube-controller-manager   0                   57e1b1aafd0a2       kube-controller-manager-addons-465751
	69406b443199d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             6 minutes ago       Running             etcd                      0                   9bee8334b5279       etcd-addons-465751
	fcfe6790934e6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             6 minutes ago       Running             kube-apiserver            0                   b68e21a90dfdc       kube-apiserver-addons-465751
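
At capture time everything the test depends on was healthy from the runtime's perspective: the nginx pod and the ingress-nginx controller were Running, and the admission create/patch jobs had exited after completing (the patch container shows attempt 1, i.e. one retry). Nothing here suggests a crashed container. A cross-check against the API server's view would be (a sketch, assuming the cluster is still up):

    kubectl --context addons-465751 get pods -A -o wide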
	
	
	==> coredns [b74da9da46dee0b6f9f012095d6c7e38d00ef29b88ea687c2e8353b81b860ceb] <==
	[INFO] 10.244.0.7:42486 - 44860 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000837575s
	[INFO] 10.244.0.7:42486 - 3149 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000126604s
	[INFO] 10.244.0.7:42486 - 18520 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000120328s
	[INFO] 10.244.0.7:42486 - 9408 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000120583s
	[INFO] 10.244.0.7:42486 - 35329 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00018953s
	[INFO] 10.244.0.7:42486 - 48963 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00025741s
	[INFO] 10.244.0.7:42486 - 28497 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000086309s
	[INFO] 10.244.0.7:34600 - 4758 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000136049s
	[INFO] 10.244.0.7:34600 - 5092 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000126884s
	[INFO] 10.244.0.7:54510 - 1081 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119564s
	[INFO] 10.244.0.7:54510 - 856 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000144829s
	[INFO] 10.244.0.7:36002 - 28129 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073179s
	[INFO] 10.244.0.7:36002 - 27694 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000161688s
	[INFO] 10.244.0.7:55341 - 7798 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00027259s
	[INFO] 10.244.0.7:55341 - 8006 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000464297s
	[INFO] 10.244.0.23:57367 - 63903 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000504214s
	[INFO] 10.244.0.23:36500 - 26718 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000974327s
	[INFO] 10.244.0.23:47064 - 3913 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164452s
	[INFO] 10.244.0.23:41969 - 25299 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000395611s
	[INFO] 10.244.0.23:39921 - 47286 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086618s
	[INFO] 10.244.0.23:46886 - 25737 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000070313s
	[INFO] 10.244.0.23:42549 - 18390 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.001332755s
	[INFO] 10.244.0.23:57630 - 1778 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001381914s
	[INFO] 10.244.0.28:33418 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000529193s
	[INFO] 10.244.0.28:38276 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000431743s
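
The NXDOMAIN bursts above are ordinary search-path expansion, not lookup failures: with the default ndots:5, a name like registry.kube-system.svc.cluster.local is first tried against each search domain, and only the final absolute query returns NOERROR. A sketch of the resolv.conf a kube-system pod would carry here (the nameserver value is the usual kube-dns ClusterIP, assumed rather than captured in this run):

    search kube-system.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10
    options ndots:5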
	
	
	==> describe nodes <==
	Name:               addons-465751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-465751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=addons-465751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T07_48_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-465751
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 07:48:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-465751
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 07:54:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 07:52:41 +0000   Sun, 26 Oct 2025 07:48:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 07:52:41 +0000   Sun, 26 Oct 2025 07:48:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 07:52:41 +0000   Sun, 26 Oct 2025 07:48:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 07:52:41 +0000   Sun, 26 Oct 2025 07:48:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    addons-465751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 4fe18ab73dd94cb787d01cbb1d006e14
	  System UUID:                4fe18ab7-3dd9-4cb7-87d0-1cbb1d006e14
	  Boot ID:                    3a79571a-e3fb-4550-89f7-b21aa941be98
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  default                     hello-world-app-5d498dc89-7l8xz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-54x6r                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-qtzss    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m48s
	  kube-system                 amd-gpu-device-plugin-bn844                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 coredns-66bc5c9577-kbfd9                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m56s
	  kube-system                 etcd-addons-465751                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m3s
	  kube-system                 kube-apiserver-addons-465751                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-controller-manager-addons-465751        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-proxy-jfndh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-scheduler-addons-465751                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m8s (x8 over 6m8s)  kubelet          Node addons-465751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s (x8 over 6m8s)  kubelet          Node addons-465751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s (x7 over 6m8s)  kubelet          Node addons-465751 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m                   kubelet          Node addons-465751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m                   kubelet          Node addons-465751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m                   kubelet          Node addons-465751 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m                   kubelet          Node addons-465751 status is now: NodeReady
	  Normal  RegisteredNode           5m57s                node-controller  Node addons-465751 event: Registered Node addons-465751 in Controller
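
The node itself is unremarkable: all four conditions are healthy, and the 2-CPU/4Gi VM sits at 850m (42%) CPU and 260Mi memory requested with no taints or pressure events. The blank Kube-Proxy Version line is expected on recent Kubernetes releases, which no longer populate that node-status field. This section is equivalent to what the following would print (a sketch):

    kubectl --context addons-465751 describe node addons-465751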
	
	
	==> dmesg <==
	[Oct26 07:49] kauditd_printk_skb: 374 callbacks suppressed
	[  +6.806156] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.502590] kauditd_printk_skb: 44 callbacks suppressed
	[ +12.032311] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.242806] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.595907] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.177526] kauditd_printk_skb: 65 callbacks suppressed
	[  +5.130144] kauditd_printk_skb: 136 callbacks suppressed
	[Oct26 07:50] kauditd_printk_skb: 75 callbacks suppressed
	[Oct26 07:51] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000047] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.622850] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.000038] kauditd_printk_skb: 17 callbacks suppressed
	[ +12.951229] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000066] kauditd_printk_skb: 22 callbacks suppressed
	[  +1.165893] kauditd_printk_skb: 107 callbacks suppressed
	[  +2.881235] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.080425] kauditd_printk_skb: 71 callbacks suppressed
	[Oct26 07:52] kauditd_printk_skb: 179 callbacks suppressed
	[  +3.357127] kauditd_printk_skb: 94 callbacks suppressed
	[  +2.788270] kauditd_printk_skb: 43 callbacks suppressed
	[  +4.040592] kauditd_printk_skb: 37 callbacks suppressed
	[  +0.000253] kauditd_printk_skb: 30 callbacks suppressed
	[  +6.855438] kauditd_printk_skb: 41 callbacks suppressed
	[Oct26 07:54] kauditd_printk_skb: 127 callbacks suppressed
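
The dmesg excerpt is dominated by kauditd_printk_skb notices, which mean only that the kernel rate-limited copies of audit records into the printk buffer; on a node that is continuously creating containers this is expected noise. To check whether anything else was logged, one could filter it out (a sketch):

    out/minikube-linux-amd64 -p addons-465751 ssh -- sudo dmesg | grep -v kauditd_printk_skb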
	
	
	==> etcd [69406b443199d152fd622120ff2bc2bda04ddc649c8543ec2230e3dababbf816] <==
	{"level":"info","ts":"2025-10-26T07:50:33.287617Z","caller":"traceutil/trace.go:172","msg":"trace[325708638] transaction","detail":"{read_only:false; response_revision:1221; number_of_response:1; }","duration":"238.163748ms","start":"2025-10-26T07:50:33.049440Z","end":"2025-10-26T07:50:33.287604Z","steps":["trace[325708638] 'process raft request'  (duration: 237.878972ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T07:50:33.288245Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.309898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T07:50:33.288760Z","caller":"traceutil/trace.go:172","msg":"trace[920991495] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1221; }","duration":"133.831818ms","start":"2025-10-26T07:50:33.154917Z","end":"2025-10-26T07:50:33.288749Z","steps":["trace[920991495] 'agreement among raft nodes before linearized reading'  (duration: 133.274242ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T07:50:33.288777Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.336063ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T07:50:33.288888Z","caller":"traceutil/trace.go:172","msg":"trace[1072298916] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1220; }","duration":"151.775118ms","start":"2025-10-26T07:50:33.137105Z","end":"2025-10-26T07:50:33.288880Z","steps":["trace[1072298916] 'agreement among raft nodes before linearized reading'  (duration: 150.313967ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T07:51:09.598420Z","caller":"traceutil/trace.go:172","msg":"trace[837996055] transaction","detail":"{read_only:false; response_revision:1270; number_of_response:1; }","duration":"128.200594ms","start":"2025-10-26T07:51:09.470205Z","end":"2025-10-26T07:51:09.598406Z","steps":["trace[837996055] 'process raft request'  (duration: 128.017988ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T07:51:24.292022Z","caller":"traceutil/trace.go:172","msg":"trace[246491061] transaction","detail":"{read_only:false; response_revision:1328; number_of_response:1; }","duration":"145.563315ms","start":"2025-10-26T07:51:24.146446Z","end":"2025-10-26T07:51:24.292009Z","steps":["trace[246491061] 'process raft request'  (duration: 145.467675ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T07:51:50.628273Z","caller":"traceutil/trace.go:172","msg":"trace[43209921] transaction","detail":"{read_only:false; response_revision:1495; number_of_response:1; }","duration":"109.537334ms","start":"2025-10-26T07:51:50.518712Z","end":"2025-10-26T07:51:50.628249Z","steps":["trace[43209921] 'process raft request'  (duration: 109.441998ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T07:51:50.862147Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.246013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T07:51:50.862208Z","caller":"traceutil/trace.go:172","msg":"trace[741306198] range","detail":"{range_begin:/registry/priorityclasses; range_end:; response_count:0; response_revision:1495; }","duration":"124.336568ms","start":"2025-10-26T07:51:50.737862Z","end":"2025-10-26T07:51:50.862199Z","steps":["trace[741306198] 'range keys from in-memory index tree'  (duration: 124.159984ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T07:52:17.134007Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"229.014983ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7572409283510049639 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/default/cloud-spanner-emulator-86bd5cbb97-dnrwl\" mod_revision:1713 > success:<request_delete_range:<key:\"/registry/pods/default/cloud-spanner-emulator-86bd5cbb97-dnrwl\" > > failure:<request_range:<key:\"/registry/pods/default/cloud-spanner-emulator-86bd5cbb97-dnrwl\" > >>","response":"size:18"}
	{"level":"info","ts":"2025-10-26T07:52:17.134128Z","caller":"traceutil/trace.go:172","msg":"trace[641644026] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1714; }","duration":"327.97225ms","start":"2025-10-26T07:52:16.806141Z","end":"2025-10-26T07:52:17.134114Z","steps":["trace[641644026] 'process raft request'  (duration: 98.456402ms)","trace[641644026] 'compare'  (duration: 228.849556ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T07:52:17.134173Z","caller":"traceutil/trace.go:172","msg":"trace[736001955] transaction","detail":"{read_only:false; response_revision:1715; number_of_response:1; }","duration":"283.920569ms","start":"2025-10-26T07:52:16.850245Z","end":"2025-10-26T07:52:17.134165Z","steps":["trace[736001955] 'process raft request'  (duration: 283.871578ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T07:52:17.134248Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T07:52:16.806116Z","time spent":"328.059413ms","remote":"127.0.0.1:53926","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":66,"response count":0,"response size":42,"request content":"compare:<target:MOD key:\"/registry/pods/default/cloud-spanner-emulator-86bd5cbb97-dnrwl\" mod_revision:1713 > success:<request_delete_range:<key:\"/registry/pods/default/cloud-spanner-emulator-86bd5cbb97-dnrwl\" > > failure:<request_range:<key:\"/registry/pods/default/cloud-spanner-emulator-86bd5cbb97-dnrwl\" > >"}
	{"level":"info","ts":"2025-10-26T07:52:17.134560Z","caller":"traceutil/trace.go:172","msg":"trace[1036621560] linearizableReadLoop","detail":"{readStateIndex:1783; appliedIndex:1782; }","duration":"206.69277ms","start":"2025-10-26T07:52:16.927772Z","end":"2025-10-26T07:52:17.134465Z","steps":["trace[1036621560] 'read index received'  (duration: 205.158149ms)","trace[1036621560] 'applied index is now lower than readState.Index'  (duration: 1.534304ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T07:52:17.134624Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"206.919726ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T07:52:17.134647Z","caller":"traceutil/trace.go:172","msg":"trace[772099768] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1715; }","duration":"206.938652ms","start":"2025-10-26T07:52:16.927695Z","end":"2025-10-26T07:52:17.134634Z","steps":["trace[772099768] 'agreement among raft nodes before linearized reading'  (duration: 206.903971ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T07:52:17.134751Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"199.496487ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T07:52:17.134766Z","caller":"traceutil/trace.go:172","msg":"trace[22683861] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:1715; }","duration":"199.512134ms","start":"2025-10-26T07:52:16.935249Z","end":"2025-10-26T07:52:17.134761Z","steps":["trace[22683861] 'agreement among raft nodes before linearized reading'  (duration: 199.481654ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T07:52:17.134913Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.8358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-10-26T07:52:17.134927Z","caller":"traceutil/trace.go:172","msg":"trace[1037278918] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1715; }","duration":"140.854199ms","start":"2025-10-26T07:52:16.994069Z","end":"2025-10-26T07:52:17.134923Z","steps":["trace[1037278918] 'agreement among raft nodes before linearized reading'  (duration: 140.787748ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T07:52:17.134987Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.18894ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T07:52:17.134998Z","caller":"traceutil/trace.go:172","msg":"trace[579596679] range","detail":"{range_begin:/registry/prioritylevelconfigurations; range_end:; response_count:0; response_revision:1715; }","duration":"144.262146ms","start":"2025-10-26T07:52:16.990732Z","end":"2025-10-26T07:52:17.134994Z","steps":["trace[579596679] 'agreement among raft nodes before linearized reading'  (duration: 144.237883ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T07:52:42.697659Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"256.16341ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T07:52:42.697738Z","caller":"traceutil/trace.go:172","msg":"trace[1736312219] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1888; }","duration":"256.324785ms","start":"2025-10-26T07:52:42.441397Z","end":"2025-10-26T07:52:42.697722Z","steps":["trace[1736312219] 'range keys from in-memory index tree'  (duration: 255.967651ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:54:35 up 6 min,  0 users,  load average: 0.83, 0.87, 0.48
	Linux addons-465751 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [fcfe6790934e696db48f78e8ab9c20df62e9f88919d8a5a803105137d47b9ff6] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1026 07:49:24.460887       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.171.124:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.171.124:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1026 07:49:24.482617       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1026 07:51:32.573805       1 conn.go:339] Error on socket receive: read tcp 192.168.39.128:8443->192.168.39.1:57500: use of closed network connection
	E1026 07:51:32.774807       1 conn.go:339] Error on socket receive: read tcp 192.168.39.128:8443->192.168.39.1:57526: use of closed network connection
	I1026 07:51:42.001657       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.255.41"}
	I1026 07:52:07.341710       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1026 07:52:07.533762       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.13.174"}
	E1026 07:52:12.850537       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1026 07:52:25.398069       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1026 07:52:25.481322       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1026 07:52:41.777660       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 07:52:41.777777       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 07:52:41.802965       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 07:52:41.803067       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 07:52:41.831315       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 07:52:41.831369       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 07:52:41.870759       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 07:52:41.870892       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1026 07:52:42.813377       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1026 07:52:42.871294       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1026 07:52:42.968056       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1026 07:54:33.769395       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.46.80"}
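
Two things stand out in the apiserver log. The early v1beta1.metrics.k8s.io 503s are the aggregation layer probing metrics-server before it is ready, and they stop once the APIService reconciles (the 07:52:25 "Nothing (removed from the queue)" line). The 07:52:41-42 block records the snapshot.storage.k8s.io groups being torn down, which terminates their watchers; the controller-manager errors in the next section are the downstream echo of that. The aggregated API's state can be checked with (a sketch):

    kubectl --context addons-465751 get apiservice v1beta1.metrics.k8s.io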
	
	
	==> kube-controller-manager [97033e19f35ac749b36cf25423c2a4736d39f2ebad370df898b7fedbc585af57] <==
	E1026 07:52:50.579744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 07:52:51.600613       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 07:52:51.601666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 07:52:56.785931       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 07:52:56.786995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 07:52:59.318214       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 07:52:59.319561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 07:53:02.938204       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 07:53:02.939302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1026 07:53:08.832955       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1026 07:53:08.832992       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 07:53:08.867957       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1026 07:53:08.867999       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1026 07:53:17.058117       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 07:53:17.059152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 07:53:19.856724       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 07:53:19.858030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 07:53:23.485098       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 07:53:23.486191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 07:53:45.918270       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 07:53:45.919581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 07:53:56.867970       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 07:53:56.869053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 07:54:07.017331       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 07:54:07.018462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
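
The repeating "failed to list *v1.PartialObjectMetadata" errors are the metadata informers retrying against the snapshot CRDs that the apiserver section shows being deleted at 07:52:41-42; the resource-quota and garbage-collector caches re-sync at 07:53:08, so this is retry noise rather than a controller failure. That the CRDs are in fact gone could be confirmed with (a sketch):

    kubectl --context addons-465751 get crd | grep -i snapshot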
	
	
	==> kube-proxy [74df49c7e9a82f8b06e4c28154546c647740bd1c20751b5b8869acc3d7e4c434] <==
	I1026 07:48:40.738861       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 07:48:40.839651       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 07:48:40.839685       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.128"]
	E1026 07:48:40.839760       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 07:48:41.167009       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1026 07:48:41.167122       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 07:48:41.167164       1 server_linux.go:132] "Using iptables Proxier"
	I1026 07:48:41.199724       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 07:48:41.203622       1 server.go:527] "Version info" version="v1.34.1"
	I1026 07:48:41.203638       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 07:48:41.214188       1 config.go:200] "Starting service config controller"
	I1026 07:48:41.214215       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 07:48:41.214231       1 config.go:106] "Starting endpoint slice config controller"
	I1026 07:48:41.214234       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 07:48:41.214246       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 07:48:41.214249       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 07:48:41.215177       1 config.go:309] "Starting node config controller"
	I1026 07:48:41.215187       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 07:48:41.215192       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 07:48:41.314592       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 07:48:41.322352       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 07:48:41.322865       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
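
The ip6tables error is benign on this single-stack IPv4 VM: the guest kernel has no ip6tables nat table, so kube-proxy logs it and proceeds in IPv4-only iptables mode, and all of its config controllers sync within a second. The nat rules it programmed can be inspected from the host; a sketch:

    out/minikube-linux-amd64 -p addons-465751 ssh -- sudo iptables -t nat -L KUBE-SERVICES | head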
	
	
	==> kube-scheduler [facb7370484d275d776da44dab41815765124c7c54b7aeb6073790b5474d181d] <==
	E1026 07:48:31.780747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 07:48:31.782583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 07:48:31.782666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 07:48:31.782790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 07:48:31.795374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 07:48:31.795721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 07:48:31.795955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 07:48:31.796041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 07:48:31.796134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 07:48:31.796279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 07:48:32.626005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 07:48:32.637588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 07:48:32.652662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 07:48:32.733557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 07:48:32.748192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 07:48:32.775515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 07:48:32.821442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 07:48:32.824067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 07:48:32.930033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 07:48:32.953775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 07:48:33.013969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 07:48:33.054916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 07:48:33.068271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 07:48:33.272033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1026 07:48:35.868091       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
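
The wall of "forbidden" list errors is the usual control-plane bootstrap race: the scheduler starts before the apiserver has published the system:kube-scheduler RBAC bindings, its reflectors retry, and everything settles once the client-ca informer syncs (final line, 07:48:35). After startup the permissions can be spot-checked with (a sketch):

    kubectl --context addons-465751 auth can-i list pods --as=system:kube-scheduler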
	
	
	==> kubelet <==
	Oct 26 07:52:47 addons-465751 kubelet[1501]: I1026 07:52:47.173140    1501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05f50167-0933-4ef4-b6ee-fe8d3650d49b" path="/var/lib/kubelet/pods/05f50167-0933-4ef4-b6ee-fe8d3650d49b/volumes"
	Oct 26 07:52:49 addons-465751 kubelet[1501]: I1026 07:52:49.164275    1501 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 07:52:55 addons-465751 kubelet[1501]: E1026 07:52:55.580263    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761465175579410932  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:52:55 addons-465751 kubelet[1501]: E1026 07:52:55.580304    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761465175579410932  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:53:05 addons-465751 kubelet[1501]: E1026 07:53:05.584255    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761465185583021919  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:53:05 addons-465751 kubelet[1501]: E1026 07:53:05.584276    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761465185583021919  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:53:15 addons-465751 kubelet[1501]: E1026 07:53:15.589462    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761465195587883307  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:53:15 addons-465751 kubelet[1501]: E1026 07:53:15.589656    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761465195587883307  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:53:25 addons-465751 kubelet[1501]: E1026 07:53:25.591816    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761465205591451192  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:53:25 addons-465751 kubelet[1501]: E1026 07:53:25.591841    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761465205591451192  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:53:35 addons-465751 kubelet[1501]: E1026 07:53:35.594234    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761465215593417434  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:53:35 addons-465751 kubelet[1501]: E1026 07:53:35.594765    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761465215593417434  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:53:41 addons-465751 kubelet[1501]: I1026 07:53:41.162971    1501 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-bn844" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 07:53:45 addons-465751 kubelet[1501]: E1026 07:53:45.597306    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761465225596957332  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:53:45 addons-465751 kubelet[1501]: E1026 07:53:45.597331    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761465225596957332  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:53:55 addons-465751 kubelet[1501]: E1026 07:53:55.602422    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761465235601323732  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:53:55 addons-465751 kubelet[1501]: E1026 07:53:55.602861    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761465235601323732  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:53:57 addons-465751 kubelet[1501]: I1026 07:53:57.164624    1501 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 07:54:05 addons-465751 kubelet[1501]: E1026 07:54:05.605163    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761465245604777928  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:54:05 addons-465751 kubelet[1501]: E1026 07:54:05.605210    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761465245604777928  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:54:15 addons-465751 kubelet[1501]: E1026 07:54:15.607826    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761465255606883145  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:54:15 addons-465751 kubelet[1501]: E1026 07:54:15.607850    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761465255606883145  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:54:25 addons-465751 kubelet[1501]: E1026 07:54:25.617766    1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761465265616024404  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:54:25 addons-465751 kubelet[1501]: E1026 07:54:25.618093    1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761465265616024404  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 07:54:33 addons-465751 kubelet[1501]: I1026 07:54:33.788719    1501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wsrh\" (UniqueName: \"kubernetes.io/projected/b72438f4-ebee-4822-a0e7-7d7c26334b40-kube-api-access-7wsrh\") pod \"hello-world-app-5d498dc89-7l8xz\" (UID: \"b72438f4-ebee-4822-a0e7-7d7c26334b40\") " pod="default/hello-world-app-5d498dc89-7l8xz"
	
	
	==> storage-provisioner [04be386c27499f20abd49cfbb426cac0e7dc0c61e8c2325a71ae59db755626ba] <==
	W1026 07:54:10.669998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:12.673843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:12.681830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:14.686645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:14.692340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:16.695684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:16.701562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:18.704827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:18.709622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:20.713327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:20.721171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:22.724780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:22.730089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:24.733553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:24.740997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:26.744763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:26.751333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:28.755074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:28.761311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:30.765661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:30.771046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:32.775033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:32.781936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:34.785998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:34.792360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-465751 -n addons-465751
helpers_test.go:269: (dbg) Run:  kubectl --context addons-465751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-7l8xz ingress-nginx-admission-create-bxcsh ingress-nginx-admission-patch-b2ss5
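The straggler list above comes from a single field selector. A minimal sketch of the same lookup from Go via os/exec (hypothetical helper, not part of helpers_test.go; context name taken from the Run line above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nonRunningPods mirrors the post-mortem query above: list every pod whose
// phase is not Running, across all namespaces, for the given kubectl context.
func nonRunningPods(kubeContext string) ([]string, error) {
	out, err := exec.Command("kubectl",
		"--context", kubeContext,
		"get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o", "jsonpath={.items[*].metadata.name}",
	).Output()
	if err != nil {
		return nil, fmt.Errorf("kubectl get po: %w", err)
	}
	return strings.Fields(string(out)), nil
}

func main() {
	pods, err := nonRunningPods("addons-465751")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("non-running pods:", strings.Join(pods, " "))
}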
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-465751 describe pod hello-world-app-5d498dc89-7l8xz ingress-nginx-admission-create-bxcsh ingress-nginx-admission-patch-b2ss5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-465751 describe pod hello-world-app-5d498dc89-7l8xz ingress-nginx-admission-create-bxcsh ingress-nginx-admission-patch-b2ss5: exit status 1 (72.533834ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-7l8xz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-465751/192.168.39.128
	Start Time:       Sun, 26 Oct 2025 07:54:33 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7wsrh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7wsrh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-7l8xz to addons-465751
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bxcsh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-b2ss5" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-465751 describe pod hello-world-app-5d498dc89-7l8xz ingress-nginx-admission-create-bxcsh ingress-nginx-admission-patch-b2ss5: exit status 1
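The non-zero exit here is expected once the ingress-nginx admission Job pods have been garbage-collected: kubectl describe returns NotFound for them while still printing the pods that do exist. A minimal sketch of a more tolerant variant (hypothetical helper, not part of helpers_test.go), probing each pod before describing it:

package main

import (
	"fmt"
	"os/exec"
)

// describeExisting checks each pod with `kubectl get` first and only
// describes the ones that still exist, so already-deleted Job pods
// don't force a non-zero exit for the whole describe step.
func describeExisting(kubeContext string, pods ...string) {
	for _, p := range pods {
		if err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pod", p).Run(); err != nil {
			fmt.Printf("skipping %s: not found\n", p)
			continue
		}
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"describe", "pod", p).CombinedOutput()
		if err != nil {
			fmt.Printf("describe %s failed: %v\n", p, err)
			continue
		}
		fmt.Print(string(out))
	}
}

func main() {
	describeExisting("addons-465751",
		"hello-world-app-5d498dc89-7l8xz",
		"ingress-nginx-admission-create-bxcsh",
		"ingress-nginx-admission-patch-b2ss5")
}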
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-465751 addons disable ingress-dns --alsologtostderr -v=1: (1.764159525s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-465751 addons disable ingress --alsologtostderr -v=1: (7.700037108s)
--- FAIL: TestAddons/parallel/Ingress (158.41s)

TestPreload (153.39s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-412971 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-412971 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m28.441008155s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-412971 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-412971 image pull gcr.io/k8s-minikube/busybox: (3.445010561s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-412971
E1026 08:43:29.912241   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-412971: (6.943944985s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-412971 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-412971 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (51.727462073s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-412971 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
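For orientation, the flow the Run lines above record reduces to four minikube invocations plus one assertion. A condensed, hypothetical sketch (binary path, profile name, and flags copied from the log; the real logic lives in preload_test.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes the minikube binary under test with the given arguments.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "test-preload-412971"
	// Start without a preload, pull busybox, stop, then restart (this time
	// with the v1.32.0 preload tarball available).
	steps := [][]string{
		{"start", "-p", profile, "--memory=3072", "--preload=false",
			"--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.32.0"},
		{"-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox"},
		{"stop", "-p", profile},
		{"start", "-p", profile, "--memory=3072", "--driver=kvm2", "--container-runtime=crio"},
	}
	for _, s := range steps {
		if out, err := run(s...); err != nil {
			fmt.Printf("step %v failed: %v\n%s", s, err, out)
			return
		}
	}
	// Assert the manually pulled image survived the stop/start cycle;
	// this is exactly the check that failed in this run.
	out, _ := run("-p", profile, "image", "list")
	if !strings.Contains(out, "gcr.io/k8s-minikube/busybox") {
		fmt.Printf("busybox missing from image list:\n%s", out)
	}
}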
panic.go:636: *** TestPreload FAILED at 2025-10-26 08:44:25.661109418 +0000 UTC m=+3421.901777317
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-412971 -n test-preload-412971
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-412971 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-412971 logs -n 25: (1.090183367s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-033904 ssh -n multinode-033904-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ ssh     │ multinode-033904 ssh -n multinode-033904 sudo cat /home/docker/cp-test_multinode-033904-m03_multinode-033904.txt                                          │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ cp      │ multinode-033904 cp multinode-033904-m03:/home/docker/cp-test.txt multinode-033904-m02:/home/docker/cp-test_multinode-033904-m03_multinode-033904-m02.txt │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ ssh     │ multinode-033904 ssh -n multinode-033904-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ ssh     │ multinode-033904 ssh -n multinode-033904-m02 sudo cat /home/docker/cp-test_multinode-033904-m03_multinode-033904-m02.txt                                  │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ node    │ multinode-033904 node stop m03                                                                                                                            │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ node    │ multinode-033904 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ node    │ list -p multinode-033904                                                                                                                                  │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ stop    │ -p multinode-033904                                                                                                                                       │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:34 UTC │
	│ start   │ -p multinode-033904 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:34 UTC │ 26 Oct 25 08:36 UTC │
	│ node    │ list -p multinode-033904                                                                                                                                  │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:36 UTC │                     │
	│ node    │ multinode-033904 node delete m03                                                                                                                          │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:36 UTC │ 26 Oct 25 08:37 UTC │
	│ stop    │ multinode-033904 stop                                                                                                                                     │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:37 UTC │ 26 Oct 25 08:39 UTC │
	│ start   │ -p multinode-033904 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:39 UTC │ 26 Oct 25 08:41 UTC │
	│ node    │ list -p multinode-033904                                                                                                                                  │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:41 UTC │                     │
	│ start   │ -p multinode-033904-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-033904-m02 │ jenkins │ v1.37.0 │ 26 Oct 25 08:41 UTC │                     │
	│ start   │ -p multinode-033904-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-033904-m03 │ jenkins │ v1.37.0 │ 26 Oct 25 08:41 UTC │ 26 Oct 25 08:41 UTC │
	│ node    │ add -p multinode-033904                                                                                                                                   │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:41 UTC │                     │
	│ delete  │ -p multinode-033904-m03                                                                                                                                   │ multinode-033904-m03 │ jenkins │ v1.37.0 │ 26 Oct 25 08:41 UTC │ 26 Oct 25 08:41 UTC │
	│ delete  │ -p multinode-033904                                                                                                                                       │ multinode-033904     │ jenkins │ v1.37.0 │ 26 Oct 25 08:41 UTC │ 26 Oct 25 08:41 UTC │
	│ start   │ -p test-preload-412971 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-412971  │ jenkins │ v1.37.0 │ 26 Oct 25 08:41 UTC │ 26 Oct 25 08:43 UTC │
	│ image   │ test-preload-412971 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-412971  │ jenkins │ v1.37.0 │ 26 Oct 25 08:43 UTC │ 26 Oct 25 08:43 UTC │
	│ stop    │ -p test-preload-412971                                                                                                                                    │ test-preload-412971  │ jenkins │ v1.37.0 │ 26 Oct 25 08:43 UTC │ 26 Oct 25 08:43 UTC │
	│ start   │ -p test-preload-412971 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-412971  │ jenkins │ v1.37.0 │ 26 Oct 25 08:43 UTC │ 26 Oct 25 08:44 UTC │
	│ image   │ test-preload-412971 image list                                                                                                                            │ test-preload-412971  │ jenkins │ v1.37.0 │ 26 Oct 25 08:44 UTC │ 26 Oct 25 08:44 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:43:33
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:43:33.788509   37292 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:43:33.788793   37292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:43:33.788804   37292 out.go:374] Setting ErrFile to fd 2...
	I1026 08:43:33.788810   37292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:43:33.788986   37292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
	I1026 08:43:33.789439   37292 out.go:368] Setting JSON to false
	I1026 08:43:33.790286   37292 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5158,"bootTime":1761463056,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:43:33.790377   37292 start.go:141] virtualization: kvm guest
	I1026 08:43:33.792393   37292 out.go:179] * [test-preload-412971] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:43:33.793695   37292 notify.go:220] Checking for updates...
	I1026 08:43:33.793703   37292 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:43:33.795000   37292 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:43:33.796152   37292 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9405/kubeconfig
	I1026 08:43:33.797314   37292 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9405/.minikube
	I1026 08:43:33.798348   37292 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:43:33.799465   37292 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:43:33.800933   37292 config.go:182] Loaded profile config "test-preload-412971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1026 08:43:33.802495   37292 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1026 08:43:33.803528   37292 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:43:33.836848   37292 out.go:179] * Using the kvm2 driver based on existing profile
	I1026 08:43:33.837993   37292 start.go:305] selected driver: kvm2
	I1026 08:43:33.838010   37292 start.go:925] validating driver "kvm2" against &{Name:test-preload-412971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-412971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:43:33.838123   37292 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:43:33.839058   37292 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:43:33.839107   37292 cni.go:84] Creating CNI manager for ""
	I1026 08:43:33.839157   37292 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 08:43:33.839196   37292 start.go:349] cluster config:
	{Name:test-preload-412971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-412971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:43:33.839282   37292 iso.go:125] acquiring lock: {Name:mk96f67d8329fb7692bdfa7d5182ebbf9e1ba018 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:43:33.840765   37292 out.go:179] * Starting "test-preload-412971" primary control-plane node in "test-preload-412971" cluster
	I1026 08:43:33.841856   37292 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1026 08:43:34.298451   37292 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1026 08:43:34.298491   37292 cache.go:58] Caching tarball of preloaded images
	I1026 08:43:34.298677   37292 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1026 08:43:34.300464   37292 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1026 08:43:34.301504   37292 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1026 08:43:34.400387   37292 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1026 08:43:34.400433   37292 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21772-9405/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1026 08:43:43.965203   37292 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1026 08:43:43.965397   37292 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/config.json ...
	I1026 08:43:43.965669   37292 start.go:360] acquireMachinesLock for test-preload-412971: {Name:mk311ee0c6906dab6c982970197b91c6534b0fc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 08:43:43.965749   37292 start.go:364] duration metric: took 51.485µs to acquireMachinesLock for "test-preload-412971"
	I1026 08:43:43.965770   37292 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:43:43.965777   37292 fix.go:54] fixHost starting: 
	I1026 08:43:43.967964   37292 fix.go:112] recreateIfNeeded on test-preload-412971: state=Stopped err=<nil>
	W1026 08:43:43.967988   37292 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:43:43.969437   37292 out.go:252] * Restarting existing kvm2 VM for "test-preload-412971" ...
	I1026 08:43:43.969487   37292 main.go:141] libmachine: starting domain...
	I1026 08:43:43.969504   37292 main.go:141] libmachine: ensuring networks are active...
	I1026 08:43:43.970282   37292 main.go:141] libmachine: Ensuring network default is active
	I1026 08:43:43.970708   37292 main.go:141] libmachine: Ensuring network mk-test-preload-412971 is active
	I1026 08:43:43.971133   37292 main.go:141] libmachine: getting domain XML...
	I1026 08:43:43.972138   37292 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-412971</name>
	  <uuid>15706c8f-a67a-4537-bb95-853e91adf749</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21772-9405/.minikube/machines/test-preload-412971/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21772-9405/.minikube/machines/test-preload-412971/test-preload-412971.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:b6:22:c7'/>
	      <source network='mk-test-preload-412971'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:47:58:41'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1026 08:43:45.240016   37292 main.go:141] libmachine: waiting for domain to start...
	I1026 08:43:45.241469   37292 main.go:141] libmachine: domain is now running
	I1026 08:43:45.241487   37292 main.go:141] libmachine: waiting for IP...
	I1026 08:43:45.242394   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:45.242935   37292 main.go:141] libmachine: domain test-preload-412971 has current primary IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:45.242951   37292 main.go:141] libmachine: found domain IP: 192.168.39.123
	I1026 08:43:45.242958   37292 main.go:141] libmachine: reserving static IP address...
	I1026 08:43:45.243384   37292 main.go:141] libmachine: found host DHCP lease matching {name: "test-preload-412971", mac: "52:54:00:b6:22:c7", ip: "192.168.39.123"} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:42:09 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:45.243419   37292 main.go:141] libmachine: skip adding static IP to network mk-test-preload-412971 - found existing host DHCP lease matching {name: "test-preload-412971", mac: "52:54:00:b6:22:c7", ip: "192.168.39.123"}
	I1026 08:43:45.243436   37292 main.go:141] libmachine: reserved static IP address 192.168.39.123 for domain test-preload-412971
	I1026 08:43:45.243449   37292 main.go:141] libmachine: waiting for SSH...
	I1026 08:43:45.243460   37292 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 08:43:45.245784   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:45.246127   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:42:09 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:45.246154   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:45.246306   37292 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:45.246513   37292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1026 08:43:45.246522   37292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 08:43:48.320380   37292 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.123:22: connect: no route to host
	I1026 08:43:54.401310   37292 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.123:22: connect: no route to host
	I1026 08:43:57.527378   37292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:43:57.530756   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:57.531292   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:57.531321   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:57.531609   37292 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/config.json ...
	I1026 08:43:57.531849   37292 machine.go:93] provisionDockerMachine start ...
	I1026 08:43:57.534570   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:57.534969   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:57.534998   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:57.535292   37292 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:57.535510   37292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1026 08:43:57.535527   37292 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:43:57.663022   37292 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 08:43:57.663050   37292 buildroot.go:166] provisioning hostname "test-preload-412971"
	I1026 08:43:57.666154   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:57.666625   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:57.666659   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:57.666879   37292 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:57.667178   37292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1026 08:43:57.667198   37292 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-412971 && echo "test-preload-412971" | sudo tee /etc/hostname
	I1026 08:43:57.800666   37292 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-412971
	
	I1026 08:43:57.803422   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:57.803842   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:57.803925   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:57.804202   37292 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:57.804473   37292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1026 08:43:57.804499   37292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-412971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-412971/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-412971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:43:57.931056   37292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:43:57.931114   37292 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9405/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9405/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9405/.minikube}
	I1026 08:43:57.931137   37292 buildroot.go:174] setting up certificates
	I1026 08:43:57.931146   37292 provision.go:84] configureAuth start
	I1026 08:43:57.934163   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:57.934609   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:57.934643   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:57.937054   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:57.937405   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:57.937425   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:57.937531   37292 provision.go:143] copyHostCerts
	I1026 08:43:57.937589   37292 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9405/.minikube/ca.pem, removing ...
	I1026 08:43:57.937603   37292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9405/.minikube/ca.pem
	I1026 08:43:57.937686   37292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9405/.minikube/ca.pem (1078 bytes)
	I1026 08:43:57.937785   37292 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9405/.minikube/cert.pem, removing ...
	I1026 08:43:57.937793   37292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9405/.minikube/cert.pem
	I1026 08:43:57.937832   37292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9405/.minikube/cert.pem (1123 bytes)
	I1026 08:43:57.937946   37292 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9405/.minikube/key.pem, removing ...
	I1026 08:43:57.937957   37292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9405/.minikube/key.pem
	I1026 08:43:57.937982   37292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9405/.minikube/key.pem (1675 bytes)
	I1026 08:43:57.938046   37292 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9405/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca-key.pem org=jenkins.test-preload-412971 san=[127.0.0.1 192.168.39.123 localhost minikube test-preload-412971]
	I1026 08:43:58.131770   37292 provision.go:177] copyRemoteCerts
	I1026 08:43:58.131840   37292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:43:58.134557   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:58.134965   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:58.134993   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:58.135165   37292 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/test-preload-412971/id_rsa Username:docker}
	I1026 08:43:58.225144   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:43:58.254140   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1026 08:43:58.282569   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:43:58.310615   37292 provision.go:87] duration metric: took 379.454221ms to configureAuth
	I1026 08:43:58.310644   37292 buildroot.go:189] setting minikube options for container-runtime
	I1026 08:43:58.310815   37292 config.go:182] Loaded profile config "test-preload-412971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1026 08:43:58.313771   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:58.314215   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:58.314245   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:58.314422   37292 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:58.314695   37292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1026 08:43:58.314716   37292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:43:58.583762   37292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:43:58.583783   37292 machine.go:96] duration metric: took 1.051919524s to provisionDockerMachine
	I1026 08:43:58.583800   37292 start.go:293] postStartSetup for "test-preload-412971" (driver="kvm2")
	I1026 08:43:58.583810   37292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:43:58.583880   37292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:43:58.586876   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:58.587388   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:58.587424   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:58.587576   37292 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/test-preload-412971/id_rsa Username:docker}
	I1026 08:43:58.679273   37292 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:43:58.683988   37292 info.go:137] Remote host: Buildroot 2025.02
	I1026 08:43:58.684023   37292 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9405/.minikube/addons for local assets ...
	I1026 08:43:58.684114   37292 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9405/.minikube/files for local assets ...
	I1026 08:43:58.684207   37292 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9405/.minikube/files/etc/ssl/certs/133212.pem -> 133212.pem in /etc/ssl/certs
	I1026 08:43:58.684299   37292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:43:58.695864   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/files/etc/ssl/certs/133212.pem --> /etc/ssl/certs/133212.pem (1708 bytes)
	I1026 08:43:58.724934   37292 start.go:296] duration metric: took 141.120775ms for postStartSetup
	I1026 08:43:58.724976   37292 fix.go:56] duration metric: took 14.759199316s for fixHost
	I1026 08:43:58.727725   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:58.728099   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:58.728124   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:58.728271   37292 main.go:141] libmachine: Using SSH client type: native
	I1026 08:43:58.728470   37292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I1026 08:43:58.728482   37292 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 08:43:58.848690   37292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761468238.816921671
	
	I1026 08:43:58.848716   37292 fix.go:216] guest clock: 1761468238.816921671
	I1026 08:43:58.848724   37292 fix.go:229] Guest: 2025-10-26 08:43:58.816921671 +0000 UTC Remote: 2025-10-26 08:43:58.724981222 +0000 UTC m=+24.984402652 (delta=91.940449ms)
	I1026 08:43:58.848738   37292 fix.go:200] guest clock delta is within tolerance: 91.940449ms
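Note: fix.go compares the guest clock (read over SSH with "date +%s.%N", as above) against the host's timestamp for the same moment, and only resyncs when the delta exceeds its tolerance; here the ~92ms skew passes. A rough manual equivalent, using the key path and user from this log (requires bc on the host):

	guest=$(ssh -i /home/jenkins/minikube-integration/21772-9405/.minikube/machines/test-preload-412971/id_rsa docker@192.168.39.123 date +%s.%N)
	host=$(date +%s.%N)
	echo "skew: $(echo "$host - $guest" | bc)s"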
	I1026 08:43:58.848743   37292 start.go:83] releasing machines lock for "test-preload-412971", held for 14.882981243s
	I1026 08:43:58.851680   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:58.852113   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:58.852144   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:58.852714   37292 ssh_runner.go:195] Run: cat /version.json
	I1026 08:43:58.852782   37292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:43:58.855669   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:58.855886   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:58.856117   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:58.856149   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:58.856316   37292 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/test-preload-412971/id_rsa Username:docker}
	I1026 08:43:58.856331   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:43:58.856364   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:43:58.856530   37292 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/test-preload-412971/id_rsa Username:docker}
	I1026 08:43:58.939175   37292 ssh_runner.go:195] Run: systemctl --version
	I1026 08:43:58.971998   37292 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:43:59.119422   37292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:43:59.126140   37292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:43:59.126210   37292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:43:59.146311   37292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 08:43:59.146336   37292 start.go:495] detecting cgroup driver to use...
	I1026 08:43:59.146392   37292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:43:59.164034   37292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:43:59.180928   37292 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:43:59.180983   37292 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:43:59.198081   37292 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:43:59.214446   37292 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:43:59.366134   37292 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:43:59.586425   37292 docker.go:234] disabling docker service ...
	I1026 08:43:59.586513   37292 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:43:59.604250   37292 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:43:59.620882   37292 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:43:59.775950   37292 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:43:59.919467   37292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:43:59.935964   37292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:43:59.958502   37292 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 08:43:59.958562   37292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:59.970517   37292 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 08:43:59.970588   37292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:59.983245   37292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:43:59.995497   37292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:44:00.007308   37292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:44:00.020824   37292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:44:00.033287   37292 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:44:00.053273   37292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
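Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly this shape (reconstructed from the commands, not read back from the VM):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]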
	I1026 08:44:00.065994   37292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:44:00.076675   37292 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 08:44:00.076756   37292 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 08:44:00.095933   37292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
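Note: the sysctl probe fails until br_netfilter is loaded because the /proc/sys/net/bridge tree only exists once the module is in, so minikube falls back to modprobe and then enables IP forwarding directly. Verifying by hand (illustrative):

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should now resolve (typically 1)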
	I1026 08:44:00.107647   37292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:44:00.246656   37292 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:44:00.369250   37292 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:44:00.369317   37292 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:44:00.374627   37292 start.go:563] Will wait 60s for crictl version
	I1026 08:44:00.374709   37292 ssh_runner.go:195] Run: which crictl
	I1026 08:44:00.379069   37292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 08:44:00.420610   37292 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 08:44:00.420704   37292 ssh_runner.go:195] Run: crio --version
	I1026 08:44:00.449796   37292 ssh_runner.go:195] Run: crio --version
	I1026 08:44:00.481608   37292 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1026 08:44:00.485518   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:44:00.485934   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:44:00.485958   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:44:00.486199   37292 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 08:44:00.490900   37292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:44:00.506836   37292 kubeadm.go:883] updating cluster {Name:test-preload-412971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-412971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:44:00.506944   37292 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1026 08:44:00.506986   37292 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:44:00.551149   37292 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1026 08:44:00.551227   37292 ssh_runner.go:195] Run: which lz4
	I1026 08:44:00.555355   37292 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 08:44:00.560179   37292 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 08:44:00.560210   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1026 08:44:01.993826   37292 crio.go:462] duration metric: took 1.438510044s to copy over tarball
	I1026 08:44:01.993900   37292 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 08:44:03.702646   37292 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.708716241s)
	I1026 08:44:03.702676   37292 crio.go:469] duration metric: took 1.708823509s to extract the tarball
	I1026 08:44:03.702685   37292 ssh_runner.go:146] rm: /preloaded.tar.lz4
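Note: the restore above is: stat shows no /preloaded.tar.lz4 on the guest, the cached tarball is copied over SSH, extracted into /var (preserving xattrs so file capabilities survive), then deleted. An approximate manual equivalent, with paths and tar flags lifted from this log (the harness copies with its own root-capable helper; plain scp may need a writable staging path first):

	scp -i /home/jenkins/minikube-integration/21772-9405/.minikube/machines/test-preload-412971/id_rsa /home/jenkins/minikube-integration/21772-9405/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 docker@192.168.39.123:/preloaded.tar.lz4
	ssh docker@192.168.39.123 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'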
	I1026 08:44:03.742700   37292 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:44:03.787584   37292 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:44:03.787610   37292 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:44:03.787617   37292 kubeadm.go:934] updating node { 192.168.39.123 8443 v1.32.0 crio true true} ...
	I1026 08:44:03.787698   37292 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-412971 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-412971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
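Note: the empty ExecStart= line in the unit above is the standard systemd drop-in idiom: it clears the packaged command so the following ExecStart= fully replaces it. After the scp of 10-kubeadm.conf and the daemon-reload below, the merged unit can be inspected with (illustrative):

	systemctl cat kubelet                 # shows kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl status kubelet --no-pager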
	I1026 08:44:03.787756   37292 ssh_runner.go:195] Run: crio config
	I1026 08:44:03.831323   37292 cni.go:84] Creating CNI manager for ""
	I1026 08:44:03.831352   37292 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 08:44:03.831373   37292 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:44:03.831403   37292 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-412971 NodeName:test-preload-412971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:44:03.831596   37292 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-412971"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.123"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 08:44:03.831684   37292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1026 08:44:03.843429   37292 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:44:03.843504   37292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:44:03.854874   37292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1026 08:44:03.875576   37292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:44:03.895945   37292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
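Note: the rendered kubeadm config is staged as kubeadm.yaml.new; minikube later diffs it against the live kubeadm.yaml and, when the restart config checks fail (as they do below), promotes it with cp before re-running the init phases. In shell terms, mirroring the commands that appear later in this log:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  || sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml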
	I1026 08:44:03.916623   37292 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I1026 08:44:03.920794   37292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:44:03.934824   37292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:44:04.081787   37292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:44:04.112962   37292 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971 for IP: 192.168.39.123
	I1026 08:44:04.112988   37292 certs.go:195] generating shared ca certs ...
	I1026 08:44:04.113012   37292 certs.go:227] acquiring lock for ca certs: {Name:mk0cc452f34380f71cd1e1f6ef82498430bd406d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:44:04.113248   37292 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9405/.minikube/ca.key
	I1026 08:44:04.113314   37292 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9405/.minikube/proxy-client-ca.key
	I1026 08:44:04.113328   37292 certs.go:257] generating profile certs ...
	I1026 08:44:04.113437   37292 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/client.key
	I1026 08:44:04.113519   37292 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/apiserver.key.91b81e41
	I1026 08:44:04.113572   37292 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/proxy-client.key
	I1026 08:44:04.113741   37292 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/13321.pem (1338 bytes)
	W1026 08:44:04.113786   37292 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-9405/.minikube/certs/13321_empty.pem, impossibly tiny 0 bytes
	I1026 08:44:04.113801   37292 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:44:04.113837   37292 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:44:04.113874   37292 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:44:04.113907   37292 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9405/.minikube/certs/key.pem (1675 bytes)
	I1026 08:44:04.113965   37292 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9405/.minikube/files/etc/ssl/certs/133212.pem (1708 bytes)
	I1026 08:44:04.114822   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:44:04.151828   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:44:04.188571   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:44:04.218698   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1026 08:44:04.248686   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 08:44:04.279042   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 08:44:04.308978   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:44:04.337020   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:44:04.366907   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/certs/13321.pem --> /usr/share/ca-certificates/13321.pem (1338 bytes)
	I1026 08:44:04.396288   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/files/etc/ssl/certs/133212.pem --> /usr/share/ca-certificates/133212.pem (1708 bytes)
	I1026 08:44:04.425838   37292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9405/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:44:04.454786   37292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:44:04.476165   37292 ssh_runner.go:195] Run: openssl version
	I1026 08:44:04.482974   37292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13321.pem && ln -fs /usr/share/ca-certificates/13321.pem /etc/ssl/certs/13321.pem"
	I1026 08:44:04.496390   37292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13321.pem
	I1026 08:44:04.501831   37292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:56 /usr/share/ca-certificates/13321.pem
	I1026 08:44:04.501905   37292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13321.pem
	I1026 08:44:04.509457   37292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13321.pem /etc/ssl/certs/51391683.0"
	I1026 08:44:04.522378   37292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/133212.pem && ln -fs /usr/share/ca-certificates/133212.pem /etc/ssl/certs/133212.pem"
	I1026 08:44:04.535693   37292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133212.pem
	I1026 08:44:04.541197   37292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:56 /usr/share/ca-certificates/133212.pem
	I1026 08:44:04.541326   37292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133212.pem
	I1026 08:44:04.548801   37292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/133212.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:44:04.561792   37292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:44:04.574057   37292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:44:04.579113   37292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:48 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:44:04.579185   37292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:44:04.586346   37292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:44:04.599489   37292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:44:04.604616   37292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:44:04.611815   37292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:44:04.619226   37292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:44:04.626757   37292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:44:04.633793   37292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:44:04.640746   37292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
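Note: two openssl idioms recur in this block: "-hash -noout" prints the subject hash that OpenSSL expects as a symlink name under /etc/ssl/certs (hence links like 51391683.0 above), and "-checkend 86400" exits nonzero if the certificate expires within the next 24 hours. Illustrative usage:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo "valid >24h"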
	I1026 08:44:04.647589   37292 kubeadm.go:400] StartCluster: {Name:test-preload-412971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-412971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:44:04.647693   37292 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:44:04.647737   37292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:44:04.688313   37292 cri.go:89] found id: ""
	I1026 08:44:04.688386   37292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:44:04.701021   37292 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 08:44:04.701042   37292 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 08:44:04.701102   37292 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 08:44:04.712876   37292 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:44:04.713270   37292 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-412971" does not appear in /home/jenkins/minikube-integration/21772-9405/kubeconfig
	I1026 08:44:04.713389   37292 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-9405/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-412971" cluster setting kubeconfig missing "test-preload-412971" context setting]
	I1026 08:44:04.713661   37292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/kubeconfig: {Name:mk03435388f71a675261bd85aa1ac6a9492586b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:44:04.714198   37292 kapi.go:59] client config for test-preload-412971: &rest.Config{Host:"https://192.168.39.123:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/client.key", CAFile:"/home/jenkins/minikube-integration/21772-9405/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 08:44:04.714637   37292 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1026 08:44:04.714656   37292 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1026 08:44:04.714665   37292 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 08:44:04.714672   37292 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1026 08:44:04.714682   37292 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 08:44:04.715014   37292 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 08:44:04.725997   37292 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.123
	I1026 08:44:04.726030   37292 kubeadm.go:1160] stopping kube-system containers ...
	I1026 08:44:04.726040   37292 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1026 08:44:04.726101   37292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:44:04.767269   37292 cri.go:89] found id: ""
	I1026 08:44:04.767332   37292 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1026 08:44:04.785632   37292 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 08:44:04.797911   37292 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 08:44:04.797932   37292 kubeadm.go:157] found existing configuration files:
	
	I1026 08:44:04.797978   37292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 08:44:04.808711   37292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 08:44:04.808768   37292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 08:44:04.820381   37292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 08:44:04.831096   37292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 08:44:04.831163   37292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 08:44:04.842220   37292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 08:44:04.853519   37292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 08:44:04.853583   37292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 08:44:04.865609   37292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 08:44:04.875950   37292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 08:44:04.876003   37292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 08:44:04.886598   37292 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 08:44:04.898480   37292 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 08:44:04.953291   37292 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 08:44:05.621895   37292 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1026 08:44:05.883631   37292 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 08:44:05.958698   37292 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
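Note: rather than a full "kubeadm init", the restart path replays individual init phases against the staged config. Compressed into one loop (equivalent to the five commands above; the unquoted $phase split is intentional):

	for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done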
	I1026 08:44:06.044831   37292 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:44:06.044910   37292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:44:06.545770   37292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:44:07.045363   37292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:44:07.545226   37292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:44:08.045990   37292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:44:08.545747   37292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:44:08.577375   37292 api_server.go:72] duration metric: took 2.532552066s to wait for apiserver process to appear ...
	I1026 08:44:08.577410   37292 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:44:08.577433   37292 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I1026 08:44:11.067119   37292 api_server.go:279] https://192.168.39.123:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 08:44:11.067159   37292 api_server.go:103] status: https://192.168.39.123:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 08:44:11.067180   37292 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I1026 08:44:11.162899   37292 api_server.go:279] https://192.168.39.123:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:44:11.162937   37292 api_server.go:103] status: https://192.168.39.123:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 08:44:11.162958   37292 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I1026 08:44:11.181129   37292 api_server.go:279] https://192.168.39.123:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:44:11.181161   37292 api_server.go:103] status: https://192.168.39.123:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 08:44:11.577739   37292 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I1026 08:44:11.582468   37292 api_server.go:279] https://192.168.39.123:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:44:11.582500   37292 api_server.go:103] status: https://192.168.39.123:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 08:44:12.077860   37292 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I1026 08:44:12.086354   37292 api_server.go:279] https://192.168.39.123:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:44:12.086381   37292 api_server.go:103] status: https://192.168.39.123:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 08:44:12.578046   37292 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I1026 08:44:12.583456   37292 api_server.go:279] https://192.168.39.123:8443/healthz returned 200:
	ok
	I1026 08:44:12.591591   37292 api_server.go:141] control plane version: v1.32.0
	I1026 08:44:12.591618   37292 api_server.go:131] duration metric: took 4.014201243s to wait for apiserver health ...
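
	[editor's note] The healthz wait above is a plain retry loop: the apiserver keeps answering 500 with "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" until its post-start hooks finish, then answers 200 "ok". A minimal Go sketch of such a loop, assuming a self-signed apiserver certificate (hence InsecureSkipVerify) and reusing the endpoint URL from this run; this is an illustration only, not minikube's actual api_server.go implementation:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz polls /healthz until it returns 200 or the deadline passes.
	// A 500 response, as in the log above, just means "not ready yet".
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The apiserver cert is signed by the cluster CA; verification is
			// skipped here only to keep the sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence seen above
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.39.123:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
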
	I1026 08:44:12.591629   37292 cni.go:84] Creating CNI manager for ""
	I1026 08:44:12.591636   37292 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 08:44:12.593434   37292 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 08:44:12.594544   37292 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 08:44:12.607766   37292 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
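
	[editor's note] The 1-k8s.conflist copied here is the bridge CNI configuration announced on the previous line. The exact 496-byte payload is not reproduced in the log; a representative bridge conflist, with illustrative field values rather than the file minikube actually wrote, looks like:

	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
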
	I1026 08:44:12.632273   37292 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:44:12.639519   37292 system_pods.go:59] 7 kube-system pods found
	I1026 08:44:12.639574   37292 system_pods.go:61] "coredns-668d6bf9bc-7gbd4" [b60fbfb3-2c2b-437d-9a09-3d38d43f9e67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:44:12.639588   37292 system_pods.go:61] "etcd-test-preload-412971" [635698cb-af54-41f9-b5da-28451b544f43] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:44:12.639599   37292 system_pods.go:61] "kube-apiserver-test-preload-412971" [1c518678-ab3d-4607-925a-76b2f7dcfd36] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:44:12.639609   37292 system_pods.go:61] "kube-controller-manager-test-preload-412971" [96c95e86-e467-4e38-910d-d91bab7dde4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:44:12.639625   37292 system_pods.go:61] "kube-proxy-2nwv8" [43757e2e-8646-484a-b189-e9e54f0722fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 08:44:12.639636   37292 system_pods.go:61] "kube-scheduler-test-preload-412971" [5ef0c645-12e6-4ecd-b87e-50cc20872658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:44:12.639650   37292 system_pods.go:61] "storage-provisioner" [6edc6fa4-10ee-41b0-8116-4028d3094a58] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:44:12.639661   37292 system_pods.go:74] duration metric: took 7.365284ms to wait for pod list to return data ...
	I1026 08:44:12.639676   37292 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:44:12.644564   37292 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 08:44:12.644611   37292 node_conditions.go:123] node cpu capacity is 2
	I1026 08:44:12.644630   37292 node_conditions.go:105] duration metric: took 4.946659ms to run NodePressure ...
	I1026 08:44:12.644693   37292 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 08:44:12.937678   37292 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1026 08:44:12.941541   37292 kubeadm.go:743] kubelet initialised
	I1026 08:44:12.941572   37292 kubeadm.go:744] duration metric: took 3.86616ms waiting for restarted kubelet to initialise ...
	I1026 08:44:12.941592   37292 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 08:44:12.956841   37292 ops.go:34] apiserver oom_adj: -16
	I1026 08:44:12.956865   37292 kubeadm.go:601] duration metric: took 8.255817539s to restartPrimaryControlPlane
	I1026 08:44:12.956874   37292 kubeadm.go:402] duration metric: took 8.309295754s to StartCluster
	I1026 08:44:12.956890   37292 settings.go:142] acquiring lock: {Name:mkae317b35dec50359a6773585fd9b9fe6191d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:44:12.956977   37292 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9405/kubeconfig
	I1026 08:44:12.957577   37292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/kubeconfig: {Name:mk03435388f71a675261bd85aa1ac6a9492586b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:44:12.957817   37292 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:44:12.957881   37292 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:44:12.957979   37292 addons.go:69] Setting storage-provisioner=true in profile "test-preload-412971"
	I1026 08:44:12.957998   37292 addons.go:238] Setting addon storage-provisioner=true in "test-preload-412971"
	W1026 08:44:12.958007   37292 addons.go:247] addon storage-provisioner should already be in state true
	I1026 08:44:12.958013   37292 addons.go:69] Setting default-storageclass=true in profile "test-preload-412971"
	I1026 08:44:12.958035   37292 host.go:66] Checking if "test-preload-412971" exists ...
	I1026 08:44:12.958045   37292 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-412971"
	I1026 08:44:12.958053   37292 config.go:182] Loaded profile config "test-preload-412971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1026 08:44:12.960190   37292 out.go:179] * Verifying Kubernetes components...
	I1026 08:44:12.960377   37292 kapi.go:59] client config for test-preload-412971: &rest.Config{Host:"https://192.168.39.123:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/client.key", CAFile:"/home/jenkins/minikube-integration/21772-9405/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 08:44:12.960765   37292 addons.go:238] Setting addon default-storageclass=true in "test-preload-412971"
	W1026 08:44:12.960783   37292 addons.go:247] addon default-storageclass should already be in state true
	I1026 08:44:12.960805   37292 host.go:66] Checking if "test-preload-412971" exists ...
	I1026 08:44:12.960906   37292 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:44:12.961802   37292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:44:12.962374   37292 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:44:12.962388   37292 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:44:12.962989   37292 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:44:12.963004   37292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
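
	[editor's note] The two addon manifests staged above are small: storageclass.yaml (271 bytes) defines the cluster's default StorageClass, and storage-provisioner.yaml (2676 bytes) deploys the storage-provisioner pod seen later in the pod list. A representative sketch of the former, with illustrative field values since the payload itself is not in the log:

	apiVersion: storage.k8s.io/v1
	kind: StorageClass
	metadata:
	  name: standard
	  annotations:
	    storageclass.kubernetes.io/is-default-class: "true"
	provisioner: k8s.io/minikube-hostpath
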
	I1026 08:44:12.965529   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:44:12.965804   37292 main.go:141] libmachine: domain test-preload-412971 has defined MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:44:12.965879   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:44:12.965916   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:44:12.966109   37292 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/test-preload-412971/id_rsa Username:docker}
	I1026 08:44:12.966224   37292 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b6:22:c7", ip: ""} in network mk-test-preload-412971: {Iface:virbr1 ExpiryTime:2025-10-26 09:43:55 +0000 UTC Type:0 Mac:52:54:00:b6:22:c7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:test-preload-412971 Clientid:01:52:54:00:b6:22:c7}
	I1026 08:44:12.966246   37292 main.go:141] libmachine: domain test-preload-412971 has defined IP address 192.168.39.123 and MAC address 52:54:00:b6:22:c7 in network mk-test-preload-412971
	I1026 08:44:12.966363   37292 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/test-preload-412971/id_rsa Username:docker}
	I1026 08:44:13.181765   37292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:44:13.213279   37292 node_ready.go:35] waiting up to 6m0s for node "test-preload-412971" to be "Ready" ...
	I1026 08:44:13.216127   37292 node_ready.go:49] node "test-preload-412971" is "Ready"
	I1026 08:44:13.216174   37292 node_ready.go:38] duration metric: took 2.815425ms for node "test-preload-412971" to be "Ready" ...
	I1026 08:44:13.216192   37292 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:44:13.216255   37292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:44:13.249391   37292 api_server.go:72] duration metric: took 291.526461ms to wait for apiserver process to appear ...
	I1026 08:44:13.249428   37292 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:44:13.249455   37292 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I1026 08:44:13.257636   37292 api_server.go:279] https://192.168.39.123:8443/healthz returned 200:
	ok
	I1026 08:44:13.259208   37292 api_server.go:141] control plane version: v1.32.0
	I1026 08:44:13.259229   37292 api_server.go:131] duration metric: took 9.793772ms to wait for apiserver health ...
	I1026 08:44:13.259237   37292 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:44:13.267386   37292 system_pods.go:59] 7 kube-system pods found
	I1026 08:44:13.267424   37292 system_pods.go:61] "coredns-668d6bf9bc-7gbd4" [b60fbfb3-2c2b-437d-9a09-3d38d43f9e67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:44:13.267434   37292 system_pods.go:61] "etcd-test-preload-412971" [635698cb-af54-41f9-b5da-28451b544f43] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:44:13.267446   37292 system_pods.go:61] "kube-apiserver-test-preload-412971" [1c518678-ab3d-4607-925a-76b2f7dcfd36] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:44:13.267454   37292 system_pods.go:61] "kube-controller-manager-test-preload-412971" [96c95e86-e467-4e38-910d-d91bab7dde4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:44:13.267462   37292 system_pods.go:61] "kube-proxy-2nwv8" [43757e2e-8646-484a-b189-e9e54f0722fe] Running
	I1026 08:44:13.267470   37292 system_pods.go:61] "kube-scheduler-test-preload-412971" [5ef0c645-12e6-4ecd-b87e-50cc20872658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:44:13.267478   37292 system_pods.go:61] "storage-provisioner" [6edc6fa4-10ee-41b0-8116-4028d3094a58] Running
	I1026 08:44:13.267487   37292 system_pods.go:74] duration metric: took 8.244356ms to wait for pod list to return data ...
	I1026 08:44:13.267501   37292 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:44:13.268355   37292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:44:13.273363   37292 default_sa.go:45] found service account: "default"
	I1026 08:44:13.273384   37292 default_sa.go:55] duration metric: took 5.876779ms for default service account to be created ...
	I1026 08:44:13.273392   37292 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:44:13.285375   37292 system_pods.go:86] 7 kube-system pods found
	I1026 08:44:13.285401   37292 system_pods.go:89] "coredns-668d6bf9bc-7gbd4" [b60fbfb3-2c2b-437d-9a09-3d38d43f9e67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:44:13.285409   37292 system_pods.go:89] "etcd-test-preload-412971" [635698cb-af54-41f9-b5da-28451b544f43] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:44:13.285416   37292 system_pods.go:89] "kube-apiserver-test-preload-412971" [1c518678-ab3d-4607-925a-76b2f7dcfd36] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:44:13.285422   37292 system_pods.go:89] "kube-controller-manager-test-preload-412971" [96c95e86-e467-4e38-910d-d91bab7dde4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:44:13.285426   37292 system_pods.go:89] "kube-proxy-2nwv8" [43757e2e-8646-484a-b189-e9e54f0722fe] Running
	I1026 08:44:13.285432   37292 system_pods.go:89] "kube-scheduler-test-preload-412971" [5ef0c645-12e6-4ecd-b87e-50cc20872658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:44:13.285437   37292 system_pods.go:89] "storage-provisioner" [6edc6fa4-10ee-41b0-8116-4028d3094a58] Running
	I1026 08:44:13.285449   37292 system_pods.go:126] duration metric: took 12.052172ms to wait for k8s-apps to be running ...
	I1026 08:44:13.285463   37292 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:44:13.285511   37292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:44:13.370147   37292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:44:13.449841   37292 system_svc.go:56] duration metric: took 164.373403ms WaitForService to wait for kubelet
	I1026 08:44:13.449876   37292 kubeadm.go:586] duration metric: took 492.019306ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:44:13.449893   37292 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:44:13.454646   37292 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 08:44:13.454679   37292 node_conditions.go:123] node cpu capacity is 2
	I1026 08:44:13.454692   37292 node_conditions.go:105] duration metric: took 4.794017ms to run NodePressure ...
	I1026 08:44:13.454705   37292 start.go:241] waiting for startup goroutines ...
	I1026 08:44:13.980137   37292 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1026 08:44:13.981593   37292 addons.go:514] duration metric: took 1.023713602s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1026 08:44:13.981625   37292 start.go:246] waiting for cluster config update ...
	I1026 08:44:13.981635   37292 start.go:255] writing updated cluster config ...
	I1026 08:44:13.981871   37292 ssh_runner.go:195] Run: rm -f paused
	I1026 08:44:13.988296   37292 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:44:13.988853   37292 kapi.go:59] client config for test-preload-412971: &rest.Config{Host:"https://192.168.39.123:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-9405/.minikube/profiles/test-preload-412971/client.key", CAFile:"/home/jenkins/minikube-integration/21772-9405/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 08:44:13.992878   37292 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-7gbd4" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 08:44:15.998577   37292 pod_ready.go:104] pod "coredns-668d6bf9bc-7gbd4" is not "Ready", error: <nil>
	W1026 08:44:17.999348   37292 pod_ready.go:104] pod "coredns-668d6bf9bc-7gbd4" is not "Ready", error: <nil>
	W1026 08:44:19.999938   37292 pod_ready.go:104] pod "coredns-668d6bf9bc-7gbd4" is not "Ready", error: <nil>
	W1026 08:44:22.499796   37292 pod_ready.go:104] pod "coredns-668d6bf9bc-7gbd4" is not "Ready", error: <nil>
	I1026 08:44:24.004702   37292 pod_ready.go:94] pod "coredns-668d6bf9bc-7gbd4" is "Ready"
	I1026 08:44:24.004736   37292 pod_ready.go:86] duration metric: took 10.011835039s for pod "coredns-668d6bf9bc-7gbd4" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:44:24.008016   37292 pod_ready.go:83] waiting for pod "etcd-test-preload-412971" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:44:24.021247   37292 pod_ready.go:94] pod "etcd-test-preload-412971" is "Ready"
	I1026 08:44:24.021272   37292 pod_ready.go:86] duration metric: took 13.226732ms for pod "etcd-test-preload-412971" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:44:24.023754   37292 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-412971" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:44:24.037962   37292 pod_ready.go:94] pod "kube-apiserver-test-preload-412971" is "Ready"
	I1026 08:44:24.037986   37292 pod_ready.go:86] duration metric: took 14.202611ms for pod "kube-apiserver-test-preload-412971" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:44:24.124741   37292 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-412971" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:44:24.196783   37292 pod_ready.go:94] pod "kube-controller-manager-test-preload-412971" is "Ready"
	I1026 08:44:24.196813   37292 pod_ready.go:86] duration metric: took 72.044781ms for pod "kube-controller-manager-test-preload-412971" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:44:24.397135   37292 pod_ready.go:83] waiting for pod "kube-proxy-2nwv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:44:24.797200   37292 pod_ready.go:94] pod "kube-proxy-2nwv8" is "Ready"
	I1026 08:44:24.797228   37292 pod_ready.go:86] duration metric: took 400.067463ms for pod "kube-proxy-2nwv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:44:24.996233   37292 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-412971" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:44:25.397720   37292 pod_ready.go:94] pod "kube-scheduler-test-preload-412971" is "Ready"
	I1026 08:44:25.397751   37292 pod_ready.go:86] duration metric: took 401.489249ms for pod "kube-scheduler-test-preload-412971" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:44:25.397765   37292 pod_ready.go:40] duration metric: took 11.409429248s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
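
	[editor's note] The pod_ready waits above repeatedly fetch each kube-system pod and test its Ready condition, retrying roughly every 2s. A minimal client-go sketch of that check, reusing the kubeconfig path and a pod name from this log; an illustration, not minikube's pod_ready.go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21772-9405/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll one pod from the log until it reports Ready.
		for {
			pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-668d6bf9bc-7gbd4", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}
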
	I1026 08:44:25.441894   37292 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1026 08:44:25.443841   37292 out.go:203] 
	W1026 08:44:25.445192   37292 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1026 08:44:25.446597   37292 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1026 08:44:25.448063   37292 out.go:179] * Done! kubectl is now configured to use "test-preload-412971" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.262537217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761468266262512806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b54950dd-7bb2-400b-bea7-54bf0dc8d1e1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.263398789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=790731a9-3bd2-4436-99aa-7b68928cae74 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.263462045Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=790731a9-3bd2-4436-99aa-7b68928cae74 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.263619448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:313c2f4d1e581512f676c82b2f84d8c02cff9e02763ca9e791f20faec5e4236e,PodSandboxId:62ecae0bbec4aa2f93aea69bca7df95997f6c93f4490f43b9591429f54f3f906,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761468256004486325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7gbd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b60fbfb3-2c2b-437d-9a09-3d38d43f9e67,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38f5b29edb96c8577067ad9273a35c91c3cec53e2ce5217d746468d08a57932,PodSandboxId:b9b130f5afe4cef18174f5f15e51aa880920b913fa62aa6c5edb68e4cdc9312a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761468252430287103,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2nwv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43757e2e-8646-484a-b189-e9e54f0722fe,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0467366a69ed01fbbc4dcad1ab1b67a8bcc285fb87dc9f20587b7442ff912ecc,PodSandboxId:570407954e1cae48e245602106be5dc0f68575eb724d8698bba52402b62831b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761468252389477361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e
dc6fa4-10ee-41b0-8116-4028d3094a58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd3b81bb2e616800f01392114e70a37c3dd52161ffa0a632e2314496fc39bec,PodSandboxId:57791e41492d51c12a1fec3b8af56053bb5fbdaaedaf19cd98c7d15d187ab5e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761468248187204851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c8f4aa29160503b2dfa0235fe3fa2d1,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e395edd7c4ecc8810c077ad76b1fa26de56c58c6cdecc4ef91d3771fc314f194,PodSandboxId:d1d5aa6d44f669ebc725c49718ffcb7f3df698ac45159d0bc07de70b5d145c0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761468248168978992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0aa0efdcb09066c1608f8d
0f86d975f,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca6173eeaa6b50de64ace1707692003d0cc90884900e3d05aa3515ce83717f11,PodSandboxId:0a57c41f74bd9c98b7d5d4a608128cbecbbce0a8f149b7659a7322d667292fa2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761468248129634684,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5436d3fe915887986cd85f7cb392f142,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ebaad7d1d4233f41b0a039e99e9b2b41e6e636f0a7c54dc47f8eee1a6a1bfee,PodSandboxId:0804864af1cfc228dd053241a9a57b210b7e750d79b382b059eddffc97e91c8e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761468248125415150,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e03c8dcb04102558715e5b77af84ecd7,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=790731a9-3bd2-4436-99aa-7b68928cae74 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.302254975Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81e5b8f1-25ef-4d46-aa9f-963086890b48 name=/runtime.v1.RuntimeService/Version
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.302403132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81e5b8f1-25ef-4d46-aa9f-963086890b48 name=/runtime.v1.RuntimeService/Version
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.303506563Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=449eae9a-e329-442c-897d-7c7e06e86472 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.303964490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761468266303944421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=449eae9a-e329-442c-897d-7c7e06e86472 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.304576784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04e024c1-a1aa-4a9c-86cf-747c72f8227f name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.304625570Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04e024c1-a1aa-4a9c-86cf-747c72f8227f name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.304797764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:313c2f4d1e581512f676c82b2f84d8c02cff9e02763ca9e791f20faec5e4236e,PodSandboxId:62ecae0bbec4aa2f93aea69bca7df95997f6c93f4490f43b9591429f54f3f906,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761468256004486325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7gbd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b60fbfb3-2c2b-437d-9a09-3d38d43f9e67,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38f5b29edb96c8577067ad9273a35c91c3cec53e2ce5217d746468d08a57932,PodSandboxId:b9b130f5afe4cef18174f5f15e51aa880920b913fa62aa6c5edb68e4cdc9312a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761468252430287103,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2nwv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43757e2e-8646-484a-b189-e9e54f0722fe,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0467366a69ed01fbbc4dcad1ab1b67a8bcc285fb87dc9f20587b7442ff912ecc,PodSandboxId:570407954e1cae48e245602106be5dc0f68575eb724d8698bba52402b62831b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761468252389477361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e
dc6fa4-10ee-41b0-8116-4028d3094a58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd3b81bb2e616800f01392114e70a37c3dd52161ffa0a632e2314496fc39bec,PodSandboxId:57791e41492d51c12a1fec3b8af56053bb5fbdaaedaf19cd98c7d15d187ab5e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761468248187204851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c8f4aa29160503b2dfa0235fe3fa2d1,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e395edd7c4ecc8810c077ad76b1fa26de56c58c6cdecc4ef91d3771fc314f194,PodSandboxId:d1d5aa6d44f669ebc725c49718ffcb7f3df698ac45159d0bc07de70b5d145c0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761468248168978992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0aa0efdcb09066c1608f8d
0f86d975f,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca6173eeaa6b50de64ace1707692003d0cc90884900e3d05aa3515ce83717f11,PodSandboxId:0a57c41f74bd9c98b7d5d4a608128cbecbbce0a8f149b7659a7322d667292fa2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761468248129634684,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5436d3fe915887986cd85f7cb392f142,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ebaad7d1d4233f41b0a039e99e9b2b41e6e636f0a7c54dc47f8eee1a6a1bfee,PodSandboxId:0804864af1cfc228dd053241a9a57b210b7e750d79b382b059eddffc97e91c8e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761468248125415150,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e03c8dcb04102558715e5b77af84ecd7,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04e024c1-a1aa-4a9c-86cf-747c72f8227f name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.343901394Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e1cbf74-f5e3-4e17-8cb1-25c95f9e66b1 name=/runtime.v1.RuntimeService/Version
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.344253501Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e1cbf74-f5e3-4e17-8cb1-25c95f9e66b1 name=/runtime.v1.RuntimeService/Version
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.345856812Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de1ca6d7-f3e0-476d-ab2c-62923c6bbc1d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.346804923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761468266346779314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de1ca6d7-f3e0-476d-ab2c-62923c6bbc1d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.347692518Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68b08bbc-9b50-4e14-b4e1-95fa6f535c37 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.347738797Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68b08bbc-9b50-4e14-b4e1-95fa6f535c37 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.347880346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:313c2f4d1e581512f676c82b2f84d8c02cff9e02763ca9e791f20faec5e4236e,PodSandboxId:62ecae0bbec4aa2f93aea69bca7df95997f6c93f4490f43b9591429f54f3f906,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761468256004486325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7gbd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b60fbfb3-2c2b-437d-9a09-3d38d43f9e67,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38f5b29edb96c8577067ad9273a35c91c3cec53e2ce5217d746468d08a57932,PodSandboxId:b9b130f5afe4cef18174f5f15e51aa880920b913fa62aa6c5edb68e4cdc9312a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761468252430287103,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2nwv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43757e2e-8646-484a-b189-e9e54f0722fe,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0467366a69ed01fbbc4dcad1ab1b67a8bcc285fb87dc9f20587b7442ff912ecc,PodSandboxId:570407954e1cae48e245602106be5dc0f68575eb724d8698bba52402b62831b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761468252389477361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e
dc6fa4-10ee-41b0-8116-4028d3094a58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd3b81bb2e616800f01392114e70a37c3dd52161ffa0a632e2314496fc39bec,PodSandboxId:57791e41492d51c12a1fec3b8af56053bb5fbdaaedaf19cd98c7d15d187ab5e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761468248187204851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c8f4aa29160503b2dfa0235fe3fa2d1,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e395edd7c4ecc8810c077ad76b1fa26de56c58c6cdecc4ef91d3771fc314f194,PodSandboxId:d1d5aa6d44f669ebc725c49718ffcb7f3df698ac45159d0bc07de70b5d145c0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761468248168978992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0aa0efdcb09066c1608f8d
0f86d975f,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca6173eeaa6b50de64ace1707692003d0cc90884900e3d05aa3515ce83717f11,PodSandboxId:0a57c41f74bd9c98b7d5d4a608128cbecbbce0a8f149b7659a7322d667292fa2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761468248129634684,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5436d3fe915887986cd85f7cb392f142,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ebaad7d1d4233f41b0a039e99e9b2b41e6e636f0a7c54dc47f8eee1a6a1bfee,PodSandboxId:0804864af1cfc228dd053241a9a57b210b7e750d79b382b059eddffc97e91c8e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761468248125415150,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e03c8dcb04102558715e5b77af84ecd7,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68b08bbc-9b50-4e14-b4e1-95fa6f535c37 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.384872844Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5fb9219-cdf8-4361-973b-48812e7bc1c2 name=/runtime.v1.RuntimeService/Version
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.384957540Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5fb9219-cdf8-4361-973b-48812e7bc1c2 name=/runtime.v1.RuntimeService/Version
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.387130087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=465bda1f-a8d5-475e-bd29-05da69b87b5e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.388184609Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761468266388158328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=465bda1f-a8d5-475e-bd29-05da69b87b5e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.388714294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d52e647-9a8a-4b3d-8432-55ffdc723745 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.388842679Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d52e647-9a8a-4b3d-8432-55ffdc723745 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 08:44:26 test-preload-412971 crio[835]: time="2025-10-26 08:44:26.389507251Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:313c2f4d1e581512f676c82b2f84d8c02cff9e02763ca9e791f20faec5e4236e,PodSandboxId:62ecae0bbec4aa2f93aea69bca7df95997f6c93f4490f43b9591429f54f3f906,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761468256004486325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7gbd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b60fbfb3-2c2b-437d-9a09-3d38d43f9e67,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38f5b29edb96c8577067ad9273a35c91c3cec53e2ce5217d746468d08a57932,PodSandboxId:b9b130f5afe4cef18174f5f15e51aa880920b913fa62aa6c5edb68e4cdc9312a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761468252430287103,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2nwv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43757e2e-8646-484a-b189-e9e54f0722fe,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0467366a69ed01fbbc4dcad1ab1b67a8bcc285fb87dc9f20587b7442ff912ecc,PodSandboxId:570407954e1cae48e245602106be5dc0f68575eb724d8698bba52402b62831b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761468252389477361,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e
dc6fa4-10ee-41b0-8116-4028d3094a58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd3b81bb2e616800f01392114e70a37c3dd52161ffa0a632e2314496fc39bec,PodSandboxId:57791e41492d51c12a1fec3b8af56053bb5fbdaaedaf19cd98c7d15d187ab5e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761468248187204851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c8f4aa29160503b2dfa0235fe3fa2d1,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e395edd7c4ecc8810c077ad76b1fa26de56c58c6cdecc4ef91d3771fc314f194,PodSandboxId:d1d5aa6d44f669ebc725c49718ffcb7f3df698ac45159d0bc07de70b5d145c0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761468248168978992,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0aa0efdcb09066c1608f8d
0f86d975f,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca6173eeaa6b50de64ace1707692003d0cc90884900e3d05aa3515ce83717f11,PodSandboxId:0a57c41f74bd9c98b7d5d4a608128cbecbbce0a8f149b7659a7322d667292fa2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761468248129634684,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5436d3fe915887986cd85f7cb392f142,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ebaad7d1d4233f41b0a039e99e9b2b41e6e636f0a7c54dc47f8eee1a6a1bfee,PodSandboxId:0804864af1cfc228dd053241a9a57b210b7e750d79b382b059eddffc97e91c8e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761468248125415150,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-412971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e03c8dcb04102558715e5b77af84ecd7,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d52e647-9a8a-4b3d-8432-55ffdc723745 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	313c2f4d1e581       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   10 seconds ago      Running             coredns                   1                   62ecae0bbec4a       coredns-668d6bf9bc-7gbd4
	a38f5b29edb96       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   14 seconds ago      Running             kube-proxy                1                   b9b130f5afe4c       kube-proxy-2nwv8
	0467366a69ed0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       2                   570407954e1ca       storage-provisioner
	6fd3b81bb2e61       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   18 seconds ago      Running             etcd                      1                   57791e41492d5       etcd-test-preload-412971
	e395edd7c4ecc       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   18 seconds ago      Running             kube-controller-manager   1                   d1d5aa6d44f66       kube-controller-manager-test-preload-412971
	ca6173eeaa6b5       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   18 seconds ago      Running             kube-scheduler            1                   0a57c41f74bd9       kube-scheduler-test-preload-412971
	1ebaad7d1d423       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   18 seconds ago      Running             kube-apiserver            1                   0804864af1cfc       kube-apiserver-test-preload-412971
	
	
	==> coredns [313c2f4d1e581512f676c82b2f84d8c02cff9e02763ca9e791f20faec5e4236e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41947 - 63874 "HINFO IN 6918626725919350201.6820397131749432991. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.07751423s
	
	
	==> describe nodes <==
	Name:               test-preload-412971
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-412971
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=test-preload-412971
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_42_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:42:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-412971
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:44:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:44:12 +0000   Sun, 26 Oct 2025 08:42:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:44:12 +0000   Sun, 26 Oct 2025 08:42:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:44:12 +0000   Sun, 26 Oct 2025 08:42:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:44:12 +0000   Sun, 26 Oct 2025 08:44:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    test-preload-412971
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 15706c8fa67a4537bb95853e91adf749
	  System UUID:                15706c8f-a67a-4537-bb95-853e91adf749
	  Boot ID:                    100a3c2c-7461-44de-9146-6b466313f528
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-7gbd4                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     99s
	  kube-system                 etcd-test-preload-412971                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         104s
	  kube-system                 kube-apiserver-test-preload-412971             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-test-preload-412971    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-2nwv8                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-scheduler-test-preload-412971             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 97s                kube-proxy       
	  Normal   Starting                 13s                kube-proxy       
	  Normal   NodeHasSufficientMemory  104s               kubelet          Node test-preload-412971 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  104s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    104s               kubelet          Node test-preload-412971 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     104s               kubelet          Node test-preload-412971 status is now: NodeHasSufficientPID
	  Normal   Starting                 104s               kubelet          Starting kubelet.
	  Normal   NodeReady                103s               kubelet          Node test-preload-412971 status is now: NodeReady
	  Normal   RegisteredNode           100s               node-controller  Node test-preload-412971 event: Registered Node test-preload-412971 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node test-preload-412971 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node test-preload-412971 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node test-preload-412971 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 15s                kubelet          Node test-preload-412971 has been rebooted, boot id: 100a3c2c-7461-44de-9146-6b466313f528
	  Normal   RegisteredNode           12s                node-controller  Node test-preload-412971 event: Registered Node test-preload-412971 in Controller
	
	
	==> dmesg <==
	[Oct26 08:43] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000053] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004888] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.958111] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000005] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct26 08:44] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.096262] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.494541] kauditd_printk_skb: 177 callbacks suppressed
	[  +8.155075] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [6fd3b81bb2e616800f01392114e70a37c3dd52161ffa0a632e2314496fc39bec] <==
	{"level":"info","ts":"2025-10-26T08:44:08.573567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e switched to configuration voters=(5520126547342350622)"}
	{"level":"info","ts":"2025-10-26T08:44:08.578697Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b780dcaae8448687","local-member-id":"4c9b6dd9118b591e","added-peer-id":"4c9b6dd9118b591e","added-peer-peer-urls":["https://192.168.39.123:2380"]}
	{"level":"info","ts":"2025-10-26T08:44:08.581532Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b780dcaae8448687","local-member-id":"4c9b6dd9118b591e","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T08:44:08.581644Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T08:44:08.584843Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-26T08:44:08.593630Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"4c9b6dd9118b591e","initial-advertise-peer-urls":["https://192.168.39.123:2380"],"listen-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-26T08:44:08.593716Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-26T08:44:08.593816Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2025-10-26T08:44:08.597340Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2025-10-26T08:44:09.932176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-26T08:44:09.932216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-26T08:44:09.932253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgPreVoteResp from 4c9b6dd9118b591e at term 2"}
	{"level":"info","ts":"2025-10-26T08:44:09.932266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became candidate at term 3"}
	{"level":"info","ts":"2025-10-26T08:44:09.932275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgVoteResp from 4c9b6dd9118b591e at term 3"}
	{"level":"info","ts":"2025-10-26T08:44:09.932283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became leader at term 3"}
	{"level":"info","ts":"2025-10-26T08:44:09.932333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4c9b6dd9118b591e elected leader 4c9b6dd9118b591e at term 3"}
	{"level":"info","ts":"2025-10-26T08:44:09.934242Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T08:44:09.934438Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T08:44:09.934240Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"4c9b6dd9118b591e","local-member-attributes":"{Name:test-preload-412971 ClientURLs:[https://192.168.39.123:2379]}","request-path":"/0/members/4c9b6dd9118b591e/attributes","cluster-id":"b780dcaae8448687","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T08:44:09.934964Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T08:44:09.934993Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-26T08:44:09.935544Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-26T08:44:09.936152Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.123:2379"}
	{"level":"info","ts":"2025-10-26T08:44:09.935551Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-26T08:44:09.936804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 08:44:26 up 0 min,  0 users,  load average: 1.11, 0.28, 0.10
	Linux test-preload-412971 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1ebaad7d1d4233f41b0a039e99e9b2b41e6e636f0a7c54dc47f8eee1a6a1bfee] <==
	I1026 08:44:11.082971       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 08:44:11.083483       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1026 08:44:11.089220       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 08:44:11.089247       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 08:44:11.089889       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 08:44:11.091447       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 08:44:11.110102       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 08:44:11.112784       1 aggregator.go:171] initial CRD sync complete...
	I1026 08:44:11.112886       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 08:44:11.112914       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 08:44:11.112929       1 cache.go:39] Caches are synced for autoregister controller
	I1026 08:44:11.153583       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1026 08:44:11.156578       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1026 08:44:11.156612       1 policy_source.go:240] refreshing policies
	E1026 08:44:11.157589       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 08:44:11.213767       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:44:11.995972       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:44:12.035654       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1026 08:44:12.765904       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1026 08:44:12.803273       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1026 08:44:12.837034       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:44:12.854848       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:44:14.498214       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:44:14.648373       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1026 08:44:14.697467       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [e395edd7c4ecc8810c077ad76b1fa26de56c58c6cdecc4ef91d3771fc314f194] <==
	I1026 08:44:14.302280       1 shared_informer.go:320] Caches are synced for resource quota
	I1026 08:44:14.311672       1 shared_informer.go:320] Caches are synced for crt configmap
	I1026 08:44:14.314026       1 shared_informer.go:320] Caches are synced for node
	I1026 08:44:14.314188       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 08:44:14.314214       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 08:44:14.314219       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1026 08:44:14.314225       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1026 08:44:14.314343       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-412971"
	I1026 08:44:14.320329       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1026 08:44:14.327604       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1026 08:44:14.327669       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1026 08:44:14.328867       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1026 08:44:14.328949       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1026 08:44:14.340345       1 shared_informer.go:320] Caches are synced for garbage collector
	I1026 08:44:14.344506       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1026 08:44:14.346809       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1026 08:44:14.346874       1 shared_informer.go:320] Caches are synced for HPA
	I1026 08:44:14.348058       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1026 08:44:14.352033       1 shared_informer.go:320] Caches are synced for endpoint
	I1026 08:44:14.357386       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1026 08:44:14.654627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="307.723647ms"
	I1026 08:44:14.655701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="104.314µs"
	I1026 08:44:16.136582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="47.167µs"
	I1026 08:44:23.999016       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.147518ms"
	I1026 08:44:23.999183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="48.972µs"
	
	
	==> kube-proxy [a38f5b29edb96c8577067ad9273a35c91c3cec53e2ce5217d746468d08a57932] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 08:44:12.617489       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 08:44:12.646857       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	E1026 08:44:12.646934       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:44:12.692691       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1026 08:44:12.692768       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 08:44:12.692793       1 server_linux.go:170] "Using iptables Proxier"
	I1026 08:44:12.695622       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:44:12.696030       1 server.go:497] "Version info" version="v1.32.0"
	I1026 08:44:12.696083       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:44:12.698698       1 config.go:199] "Starting service config controller"
	I1026 08:44:12.698789       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 08:44:12.698889       1 config.go:105] "Starting endpoint slice config controller"
	I1026 08:44:12.698907       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 08:44:12.699974       1 config.go:329] "Starting node config controller"
	I1026 08:44:12.699997       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 08:44:12.799325       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 08:44:12.800027       1 shared_informer.go:320] Caches are synced for node config
	I1026 08:44:12.800095       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [ca6173eeaa6b50de64ace1707692003d0cc90884900e3d05aa3515ce83717f11] <==
	I1026 08:44:08.973815       1 serving.go:386] Generated self-signed cert in-memory
	W1026 08:44:11.069249       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 08:44:11.069405       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 08:44:11.069433       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 08:44:11.069512       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 08:44:11.121077       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1026 08:44:11.121118       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:44:11.128259       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:44:11.128408       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 08:44:11.130509       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1026 08:44:11.130615       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 08:44:11.228734       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 08:44:11 test-preload-412971 kubelet[1158]: E1026 08:44:11.272422    1158 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-412971\" already exists" pod="kube-system/kube-controller-manager-test-preload-412971"
	Oct 26 08:44:11 test-preload-412971 kubelet[1158]: I1026 08:44:11.272645    1158 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-412971"
	Oct 26 08:44:11 test-preload-412971 kubelet[1158]: E1026 08:44:11.281670    1158 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-412971\" already exists" pod="kube-system/kube-scheduler-test-preload-412971"
	Oct 26 08:44:11 test-preload-412971 kubelet[1158]: I1026 08:44:11.281709    1158 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-412971"
	Oct 26 08:44:11 test-preload-412971 kubelet[1158]: E1026 08:44:11.289587    1158 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-412971\" already exists" pod="kube-system/etcd-test-preload-412971"
	Oct 26 08:44:11 test-preload-412971 kubelet[1158]: I1026 08:44:11.289611    1158 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-412971"
	Oct 26 08:44:11 test-preload-412971 kubelet[1158]: E1026 08:44:11.297458    1158 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-412971\" already exists" pod="kube-system/kube-apiserver-test-preload-412971"
	Oct 26 08:44:11 test-preload-412971 kubelet[1158]: I1026 08:44:11.958382    1158 apiserver.go:52] "Watching apiserver"
	Oct 26 08:44:11 test-preload-412971 kubelet[1158]: E1026 08:44:11.965125    1158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-7gbd4" podUID="b60fbfb3-2c2b-437d-9a09-3d38d43f9e67"
	Oct 26 08:44:11 test-preload-412971 kubelet[1158]: I1026 08:44:11.982546    1158 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 26 08:44:12 test-preload-412971 kubelet[1158]: I1026 08:44:12.028451    1158 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43757e2e-8646-484a-b189-e9e54f0722fe-lib-modules\") pod \"kube-proxy-2nwv8\" (UID: \"43757e2e-8646-484a-b189-e9e54f0722fe\") " pod="kube-system/kube-proxy-2nwv8"
	Oct 26 08:44:12 test-preload-412971 kubelet[1158]: I1026 08:44:12.028523    1158 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6edc6fa4-10ee-41b0-8116-4028d3094a58-tmp\") pod \"storage-provisioner\" (UID: \"6edc6fa4-10ee-41b0-8116-4028d3094a58\") " pod="kube-system/storage-provisioner"
	Oct 26 08:44:12 test-preload-412971 kubelet[1158]: I1026 08:44:12.028572    1158 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43757e2e-8646-484a-b189-e9e54f0722fe-xtables-lock\") pod \"kube-proxy-2nwv8\" (UID: \"43757e2e-8646-484a-b189-e9e54f0722fe\") " pod="kube-system/kube-proxy-2nwv8"
	Oct 26 08:44:12 test-preload-412971 kubelet[1158]: E1026 08:44:12.028678    1158 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 26 08:44:12 test-preload-412971 kubelet[1158]: E1026 08:44:12.028788    1158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b60fbfb3-2c2b-437d-9a09-3d38d43f9e67-config-volume podName:b60fbfb3-2c2b-437d-9a09-3d38d43f9e67 nodeName:}" failed. No retries permitted until 2025-10-26 08:44:12.528759084 +0000 UTC m=+6.676957042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b60fbfb3-2c2b-437d-9a09-3d38d43f9e67-config-volume") pod "coredns-668d6bf9bc-7gbd4" (UID: "b60fbfb3-2c2b-437d-9a09-3d38d43f9e67") : object "kube-system"/"coredns" not registered
	Oct 26 08:44:12 test-preload-412971 kubelet[1158]: E1026 08:44:12.532906    1158 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 26 08:44:12 test-preload-412971 kubelet[1158]: E1026 08:44:12.533220    1158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b60fbfb3-2c2b-437d-9a09-3d38d43f9e67-config-volume podName:b60fbfb3-2c2b-437d-9a09-3d38d43f9e67 nodeName:}" failed. No retries permitted until 2025-10-26 08:44:13.533198977 +0000 UTC m=+7.681396927 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b60fbfb3-2c2b-437d-9a09-3d38d43f9e67-config-volume") pod "coredns-668d6bf9bc-7gbd4" (UID: "b60fbfb3-2c2b-437d-9a09-3d38d43f9e67") : object "kube-system"/"coredns" not registered
	Oct 26 08:44:12 test-preload-412971 kubelet[1158]: I1026 08:44:12.841046    1158 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 26 08:44:13 test-preload-412971 kubelet[1158]: E1026 08:44:13.543583    1158 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 26 08:44:13 test-preload-412971 kubelet[1158]: E1026 08:44:13.543682    1158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b60fbfb3-2c2b-437d-9a09-3d38d43f9e67-config-volume podName:b60fbfb3-2c2b-437d-9a09-3d38d43f9e67 nodeName:}" failed. No retries permitted until 2025-10-26 08:44:15.543668052 +0000 UTC m=+9.691866001 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b60fbfb3-2c2b-437d-9a09-3d38d43f9e67-config-volume") pod "coredns-668d6bf9bc-7gbd4" (UID: "b60fbfb3-2c2b-437d-9a09-3d38d43f9e67") : object "kube-system"/"coredns" not registered
	Oct 26 08:44:16 test-preload-412971 kubelet[1158]: E1026 08:44:16.055463    1158 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761468256054482593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 08:44:16 test-preload-412971 kubelet[1158]: E1026 08:44:16.055524    1158 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761468256054482593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 08:44:23 test-preload-412971 kubelet[1158]: I1026 08:44:23.960187    1158 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 08:44:26 test-preload-412971 kubelet[1158]: E1026 08:44:26.060994    1158 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761468266060624737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 08:44:26 test-preload-412971 kubelet[1158]: E1026 08:44:26.061017    1158 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761468266060624737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0467366a69ed01fbbc4dcad1ab1b67a8bcc285fb87dc9f20587b7442ff912ecc] <==
	I1026 08:44:12.491944       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-412971 -n test-preload-412971
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-412971 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-412971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-412971
--- FAIL: TestPreload (153.39s)
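
For local triage of a failure like this one, the post-mortem above can be reproduced by hand against the affected profile before it is cleaned up; a minimal sketch using only commands that already appear in this report (profile name taken from this run):

	# overall host/apiserver state of the profile
	out/minikube-linux-amd64 status -p test-preload-412971
	# last 25 lines of the aggregated minikube logs (the "-- stdout --" block above)
	out/minikube-linux-amd64 logs -p test-preload-412971 -n 25
	# any pods not in Running phase, as in helpers_test.go:269
	kubectl --context test-preload-412971 get po -A --field-selector=status.phase!=Running
	# discard the profile once triage is done
	out/minikube-linux-amd64 delete -p test-preload-412971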


Test pass (287/329)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 22.55
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 10.61
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.63
22 TestOffline 101.92
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 203.16
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 10.52
35 TestAddons/parallel/Registry 19.28
36 TestAddons/parallel/RegistryCreds 0.62
38 TestAddons/parallel/InspektorGadget 6.3
39 TestAddons/parallel/MetricsServer 6.74
41 TestAddons/parallel/CSI 61.2
42 TestAddons/parallel/Headlamp 22.2
43 TestAddons/parallel/CloudSpanner 5.73
44 TestAddons/parallel/LocalPath 58.99
45 TestAddons/parallel/NvidiaDevicePlugin 6.94
46 TestAddons/parallel/Yakd 11.77
48 TestAddons/StoppedEnableDisable 85.83
49 TestCertOptions 73.3
50 TestCertExpiration 301.67
52 TestForceSystemdFlag 74.08
53 TestForceSystemdEnv 59.65
58 TestErrorSpam/setup 37.45
59 TestErrorSpam/start 0.32
60 TestErrorSpam/status 0.65
61 TestErrorSpam/pause 1.48
62 TestErrorSpam/unpause 1.72
63 TestErrorSpam/stop 5.07
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 52.45
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 33.72
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.12
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
75 TestFunctional/serial/CacheCmd/cache/add_local 2.09
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 289.13
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.49
86 TestFunctional/serial/LogsFileCmd 1.47
87 TestFunctional/serial/InvalidService 4.36
89 TestFunctional/parallel/ConfigCmd 0.39
90 TestFunctional/parallel/DashboardCmd 19.48
91 TestFunctional/parallel/DryRun 0.24
92 TestFunctional/parallel/InternationalLanguage 0.13
93 TestFunctional/parallel/StatusCmd 0.63
97 TestFunctional/parallel/ServiceCmdConnect 9.42
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 45.91
101 TestFunctional/parallel/SSHCmd 0.34
102 TestFunctional/parallel/CpCmd 1.09
103 TestFunctional/parallel/MySQL 31.27
104 TestFunctional/parallel/FileSync 0.18
105 TestFunctional/parallel/CertSync 1.35
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
113 TestFunctional/parallel/License 0.31
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.17
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
116 TestFunctional/parallel/ProfileCmd/profile_list 0.29
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
118 TestFunctional/parallel/MountCmd/any-port 7.68
119 TestFunctional/parallel/ServiceCmd/List 0.28
120 TestFunctional/parallel/MountCmd/specific-port 1.78
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
132 TestFunctional/parallel/Version/short 0.06
133 TestFunctional/parallel/Version/components 0.7
134 TestFunctional/parallel/ServiceCmd/Format 0.35
135 TestFunctional/parallel/ServiceCmd/URL 0.35
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
138 TestFunctional/parallel/ImageCommands/ImageListJson 1.39
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
140 TestFunctional/parallel/ImageCommands/ImageBuild 6.85
141 TestFunctional/parallel/ImageCommands/Setup 1.78
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.61
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.77
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.83
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.74
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.74
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.67
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 203.53
161 TestMultiControlPlane/serial/DeployApp 7.61
162 TestMultiControlPlane/serial/PingHostFromPods 1.27
163 TestMultiControlPlane/serial/AddWorkerNode 43.81
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.68
166 TestMultiControlPlane/serial/CopyFile 10.54
167 TestMultiControlPlane/serial/StopSecondaryNode 83.66
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.49
169 TestMultiControlPlane/serial/RestartSecondaryNode 42.77
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 375.68
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.04
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
174 TestMultiControlPlane/serial/StopCluster 231.21
175 TestMultiControlPlane/serial/RestartCluster 92.82
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.48
177 TestMultiControlPlane/serial/AddSecondaryNode 85.21
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.66
182 TestJSONOutput/start/Command 79.95
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.73
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.62
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 6.83
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.23
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 77.64
214 TestMountStart/serial/StartWithMountFirst 23.11
215 TestMountStart/serial/VerifyMountFirst 0.3
216 TestMountStart/serial/StartWithMountSecond 20.96
217 TestMountStart/serial/VerifyMountSecond 0.29
218 TestMountStart/serial/DeleteFirst 0.68
219 TestMountStart/serial/VerifyMountPostDelete 0.29
220 TestMountStart/serial/Stop 1.2
221 TestMountStart/serial/RestartStopped 18.61
222 TestMountStart/serial/VerifyMountPostStop 0.31
225 TestMultiNode/serial/FreshStart2Nodes 98.28
226 TestMultiNode/serial/DeployApp2Nodes 6.15
227 TestMultiNode/serial/PingHostFrom2Pods 0.82
228 TestMultiNode/serial/AddNode 70.41
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.44
231 TestMultiNode/serial/CopyFile 5.89
232 TestMultiNode/serial/StopNode 2.13
233 TestMultiNode/serial/StartAfterStop 39.64
234 TestMultiNode/serial/RestartKeepsNodes 302.28
235 TestMultiNode/serial/DeleteNode 2.53
236 TestMultiNode/serial/StopMultiNode 166.32
237 TestMultiNode/serial/RestartMultiNode 84.37
238 TestMultiNode/serial/ValidateNameConflict 40.29
245 TestScheduledStopUnix 106.82
249 TestRunningBinaryUpgrade 117.74
251 TestKubernetesUpgrade 151.66
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 76.44
256 TestNoKubernetes/serial/StartWithStopK8s 27.59
260 TestNoKubernetes/serial/Start 23.9
265 TestNetworkPlugins/group/false 4.98
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.15
270 TestNoKubernetes/serial/ProfileList 0.64
271 TestNoKubernetes/serial/Stop 1.2
272 TestNoKubernetes/serial/StartNoArgs 56.51
273 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
282 TestPause/serial/Start 88.1
283 TestStoppedBinaryUpgrade/Setup 2.76
284 TestStoppedBinaryUpgrade/Upgrade 123.8
285 TestNetworkPlugins/group/auto/Start 91.91
286 TestPause/serial/SecondStartNoReconfiguration 39.47
287 TestPause/serial/Pause 0.78
288 TestPause/serial/VerifyStatus 0.21
289 TestPause/serial/Unpause 0.72
290 TestPause/serial/PauseAgain 0.91
291 TestPause/serial/DeletePaused 0.89
292 TestStoppedBinaryUpgrade/MinikubeLogs 1.08
293 TestPause/serial/VerifyDeletedResources 0.6
294 TestNetworkPlugins/group/kindnet/Start 61.41
295 TestNetworkPlugins/group/calico/Start 93.28
296 TestNetworkPlugins/group/auto/KubeletFlags 0.18
297 TestNetworkPlugins/group/auto/NetCatPod 12.24
298 TestNetworkPlugins/group/auto/DNS 0.15
299 TestNetworkPlugins/group/auto/Localhost 0.15
300 TestNetworkPlugins/group/auto/HairPin 0.15
301 TestNetworkPlugins/group/custom-flannel/Start 73.19
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
304 TestNetworkPlugins/group/kindnet/NetCatPod 10.33
305 TestNetworkPlugins/group/kindnet/DNS 0.15
306 TestNetworkPlugins/group/kindnet/Localhost 0.14
307 TestNetworkPlugins/group/kindnet/HairPin 0.13
308 TestNetworkPlugins/group/enable-default-cni/Start 85.86
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.2
311 TestNetworkPlugins/group/calico/NetCatPod 12.26
312 TestNetworkPlugins/group/calico/DNS 0.15
313 TestNetworkPlugins/group/calico/Localhost 0.12
314 TestNetworkPlugins/group/calico/HairPin 0.13
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.31
317 TestNetworkPlugins/group/flannel/Start 68.4
318 TestNetworkPlugins/group/bridge/Start 109.6
319 TestNetworkPlugins/group/custom-flannel/DNS 0.17
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
323 TestStartStop/group/old-k8s-version/serial/FirstStart 118.91
324 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
325 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.12
326 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
327 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
328 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
329 TestNetworkPlugins/group/flannel/ControllerPod 6.01
330 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
331 TestNetworkPlugins/group/flannel/NetCatPod 11.26
333 TestStartStop/group/no-preload/serial/FirstStart 97.77
334 TestNetworkPlugins/group/flannel/DNS 0.15
335 TestNetworkPlugins/group/flannel/Localhost 0.13
336 TestNetworkPlugins/group/flannel/HairPin 0.14
338 TestStartStop/group/embed-certs/serial/FirstStart 60.39
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
340 TestNetworkPlugins/group/bridge/NetCatPod 11.28
341 TestNetworkPlugins/group/bridge/DNS 0.17
342 TestNetworkPlugins/group/bridge/Localhost 0.16
343 TestNetworkPlugins/group/bridge/HairPin 0.18
344 TestStartStop/group/old-k8s-version/serial/DeployApp 11.43
346 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84
347 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.5
348 TestStartStop/group/old-k8s-version/serial/Stop 85.58
349 TestStartStop/group/embed-certs/serial/DeployApp 12.3
350 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
351 TestStartStop/group/embed-certs/serial/Stop 81.5
352 TestStartStop/group/no-preload/serial/DeployApp 11.3
353 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
354 TestStartStop/group/no-preload/serial/Stop 89.42
355 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.26
356 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.94
357 TestStartStop/group/default-k8s-diff-port/serial/Stop 86.1
358 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
359 TestStartStop/group/old-k8s-version/serial/SecondStart 41.13
360 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
361 TestStartStop/group/embed-certs/serial/SecondStart 44.01
362 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
363 TestStartStop/group/no-preload/serial/SecondStart 60.26
364 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 14.01
365 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
366 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
367 TestStartStop/group/old-k8s-version/serial/Pause 2.8
368 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10
370 TestStartStop/group/newest-cni/serial/FirstStart 46.4
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
372 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.2
373 TestStartStop/group/embed-certs/serial/Pause 2.71
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
375 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.82
376 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12
377 TestStartStop/group/newest-cni/serial/DeployApp 0
378 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.55
379 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
380 TestStartStop/group/newest-cni/serial/Stop 7.93
381 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.2
382 TestStartStop/group/no-preload/serial/Pause 2.81
383 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
384 TestStartStop/group/newest-cni/serial/SecondStart 32.95
385 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
386 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
387 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
388 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.25
389 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
392 TestStartStop/group/newest-cni/serial/Pause 2.27
TestDownloadOnly/v1.28.0/json-events (22.55s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-810456 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-810456 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.545605461s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (22.55s)
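
The -o=json flag exercised above makes minikube print each progress event as one JSON object per line, which is what the json-events test consumes; a minimal sketch of inspecting that stream by hand (the jq pretty-printing step is illustrative and not part of the test):

	# pretty-print the event stream; each line is a self-contained JSON event
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-810456 --force --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2 | jq .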

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1026 07:47:46.340790   13321 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1026 07:47:46.340904   13321 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9405/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
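
preload-exists only asserts that the preload tarball is already present in the local cache; a minimal sketch of the equivalent manual check (path taken from the preload.go log line above):

	# confirm the v1.28.0 cri-o preload tarball is cached
	ls -lh /home/jenkins/minikube-integration/21772-9405/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4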

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-810456
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-810456: exit status 85 (70.884375ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-810456 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-810456 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 07:47:23
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 07:47:23.845154   13333 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:47:23.845375   13333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:47:23.845383   13333 out.go:374] Setting ErrFile to fd 2...
	I1026 07:47:23.845386   13333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:47:23.845589   13333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
	W1026 07:47:23.845826   13333 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21772-9405/.minikube/config/config.json: open /home/jenkins/minikube-integration/21772-9405/.minikube/config/config.json: no such file or directory
	I1026 07:47:23.846900   13333 out.go:368] Setting JSON to true
	I1026 07:47:23.847745   13333 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1788,"bootTime":1761463056,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 07:47:23.847833   13333 start.go:141] virtualization: kvm guest
	I1026 07:47:23.849942   13333 out.go:99] [download-only-810456] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 07:47:23.850062   13333 notify.go:220] Checking for updates...
	W1026 07:47:23.850070   13333 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21772-9405/.minikube/cache/preloaded-tarball: no such file or directory
	I1026 07:47:23.851162   13333 out.go:171] MINIKUBE_LOCATION=21772
	I1026 07:47:23.852893   13333 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 07:47:23.854082   13333 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21772-9405/kubeconfig
	I1026 07:47:23.855118   13333 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9405/.minikube
	I1026 07:47:23.856205   13333 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1026 07:47:23.858253   13333 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 07:47:23.858452   13333 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 07:47:24.322805   13333 out.go:99] Using the kvm2 driver based on user configuration
	I1026 07:47:24.322848   13333 start.go:305] selected driver: kvm2
	I1026 07:47:24.322857   13333 start.go:925] validating driver "kvm2" against <nil>
	I1026 07:47:24.323335   13333 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 07:47:24.323894   13333 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1026 07:47:24.324076   13333 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 07:47:24.324126   13333 cni.go:84] Creating CNI manager for ""
	I1026 07:47:24.324196   13333 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 07:47:24.324210   13333 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 07:47:24.324268   13333 start.go:349] cluster config:
	{Name:download-only-810456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-810456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 07:47:24.324508   13333 iso.go:125] acquiring lock: {Name:mk96f67d8329fb7692bdfa7d5182ebbf9e1ba018 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 07:47:24.326063   13333 out.go:99] Downloading VM boot image ...
	I1026 07:47:24.326115   13333 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21772-9405/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1026 07:47:34.276594   13333 out.go:99] Starting "download-only-810456" primary control-plane node in "download-only-810456" cluster
	I1026 07:47:34.276615   13333 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 07:47:34.374128   13333 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1026 07:47:34.374160   13333 cache.go:58] Caching tarball of preloaded images
	I1026 07:47:34.374318   13333 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 07:47:34.375956   13333 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1026 07:47:34.375975   13333 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1026 07:47:34.473843   13333 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1026 07:47:34.474005   13333 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21772-9405/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-810456 host does not exist
	  To start a cluster, run: "minikube start -p download-only-810456"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-810456
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (10.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-666462 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-666462 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.612761005s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (10.61s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1026 07:47:57.326815   13321 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1026 07:47:57.326856   13321 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9405/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-666462
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-666462: exit status 85 (72.957765ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-810456 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-810456 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ delete  │ -p download-only-810456                                                                                                                                                 │ download-only-810456 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ start   │ -o=json --download-only -p download-only-666462 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-666462 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 07:47:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 07:47:46.766042   13588 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:47:46.766330   13588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:47:46.766342   13588 out.go:374] Setting ErrFile to fd 2...
	I1026 07:47:46.766348   13588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:47:46.766623   13588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
	I1026 07:47:46.767245   13588 out.go:368] Setting JSON to true
	I1026 07:47:46.768341   13588 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1811,"bootTime":1761463056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 07:47:46.768487   13588 start.go:141] virtualization: kvm guest
	I1026 07:47:46.770123   13588 out.go:99] [download-only-666462] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 07:47:46.770240   13588 notify.go:220] Checking for updates...
	I1026 07:47:46.771286   13588 out.go:171] MINIKUBE_LOCATION=21772
	I1026 07:47:46.772424   13588 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 07:47:46.773542   13588 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21772-9405/kubeconfig
	I1026 07:47:46.774772   13588 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9405/.minikube
	I1026 07:47:46.776001   13588 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1026 07:47:46.777983   13588 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 07:47:46.778249   13588 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 07:47:46.812067   13588 out.go:99] Using the kvm2 driver based on user configuration
	I1026 07:47:46.812111   13588 start.go:305] selected driver: kvm2
	I1026 07:47:46.812117   13588 start.go:925] validating driver "kvm2" against <nil>
	I1026 07:47:46.812422   13588 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 07:47:46.812855   13588 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1026 07:47:46.813023   13588 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 07:47:46.813045   13588 cni.go:84] Creating CNI manager for ""
	I1026 07:47:46.813096   13588 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 07:47:46.813105   13588 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 07:47:46.813140   13588 start.go:349] cluster config:
	{Name:download-only-666462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-666462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 07:47:46.813223   13588 iso.go:125] acquiring lock: {Name:mk96f67d8329fb7692bdfa7d5182ebbf9e1ba018 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 07:47:46.814382   13588 out.go:99] Starting "download-only-666462" primary control-plane node in "download-only-666462" cluster
	I1026 07:47:46.814394   13588 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 07:47:46.906057   13588 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 07:47:46.906081   13588 cache.go:58] Caching tarball of preloaded images
	I1026 07:47:46.906281   13588 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 07:47:46.908187   13588 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1026 07:47:46.908208   13588 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1026 07:47:47.010813   13588 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1026 07:47:47.010878   13588 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21772-9405/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 07:47:56.595457   13588 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 07:47:56.595835   13588 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/download-only-666462/config.json ...
	I1026 07:47:56.595864   13588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/download-only-666462/config.json: {Name:mk0a0c38f4413f4642c12301e576b521d618bb0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:56.596008   13588 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 07:47:56.596175   13588 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21772-9405/.minikube/cache/linux/amd64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-666462 host does not exist
	  To start a cluster, run: "minikube start -p download-only-666462"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-666462
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I1026 07:47:57.972775   13321 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-183743 --alsologtostderr --binary-mirror http://127.0.0.1:42285 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-183743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-183743
--- PASS: TestBinaryMirror (0.63s)
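
--binary-mirror redirects the kubectl/kubeadm/kubelet downloads to an alternate host. A minimal local sketch, assuming the mirror has to mimic the dl.k8s.io path layout seen elsewhere in this report (/release/<version>/bin/linux/amd64/...); the profile name is illustrative:

    # hypothetical mirror: serve a directory tree shaped like dl.k8s.io
    mkdir -p mirror/release/v1.34.1/bin/linux/amd64
    # ...populate it with kubectl/kubeadm/kubelet plus their .sha256 files...
    python3 -m http.server 42285 --directory mirror &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:42285 --driver=kvm2 --container-runtime=crio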

                                                
                                    
x
+
TestOffline (101.92s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-472610 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-472610 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m40.850433563s)
helpers_test.go:175: Cleaning up "offline-crio-472610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-472610
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-472610: (1.06520572s)
--- PASS: TestOffline (101.92s)
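
TestOffline depends on the ISO, preload and images already sitting in the local cache. A rough way to reproduce that precondition by hand, using only flags that appear in this report (profile name illustrative); the second start should then need little to no network access:

    # warm the cache while online, then discard the profile
    out/minikube-linux-amd64 start --download-only -p offline-demo --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p offline-demo
    # a later start can now draw on the cached artifacts
    out/minikube-linux-amd64 start -p offline-demo --driver=kvm2 --container-runtime=crio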

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-465751
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-465751: exit status 85 (65.423492ms)

                                                
                                                
-- stdout --
	* Profile "addons-465751" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-465751"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
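
This subtest and the Disabling variant below both assert that addon commands against a profile that does not exist fail fast with exit status 85 instead of creating anything. A quick manual probe (profile name hypothetical):

    out/minikube-linux-amd64 addons enable dashboard -p no-such-profile
    echo $?   # expected: 85, matching the test expectation above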

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-465751
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-465751: exit status 85 (64.727575ms)

                                                
                                                
-- stdout --
	* Profile "addons-465751" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-465751"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (203.16s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-465751 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-465751 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m23.163279842s)
--- PASS: TestAddons/Setup (203.16s)
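
The Setup command above enables fifteen addons in a single start. For local experiments a trimmed variant of the same shape is usually enough; this is a sketch with a hypothetical profile name and an arbitrary subset of the addons:

    out/minikube-linux-amd64 start -p addons-demo --wait=true --memory=4096 \
      --driver=kvm2 --container-runtime=crio \
      --addons=ingress --addons=metrics-server --addons=registry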

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-465751 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-465751 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)
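
What the test exercises is that the gcp-auth addon replicates its gcp-auth secret into namespaces created after setup. The equivalent manual check (namespace name hypothetical):

    kubectl --context addons-465751 create ns scratch-ns
    # the secret should appear in the new namespace shortly after creation
    kubectl --context addons-465751 get secret gcp-auth -n scratch-ns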

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-465751 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-465751 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ee16220a-f0b1-46cf-a6ce-6883375c22fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ee16220a-f0b1-46cf-a6ce-6883375c22fb] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004832351s
addons_test.go:694: (dbg) Run:  kubectl --context addons-465751 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-465751 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-465751 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.52s)

                                                
                                    
x
+
TestAddons/parallel/Registry (19.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.934854ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-6z556" [e3274c78-922e-4531-bf22-ada2d7ee76ba] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004582198s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-9pmnx" [de45aec4-aed2-4c08-a39d-e1f65e28899e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003973673s
addons_test.go:392: (dbg) Run:  kubectl --context addons-465751 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-465751 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-465751 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.539891348s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 ip
2025/10/26 07:51:59 [DEBUG] GET http://192.168.39.128:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.28s)
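
Besides the in-cluster wget probe, the tooling hits the registry on the node IP at port 5000 (the DEBUG line above). The same reachability check can be made from the host, assuming the addon serves the standard Docker registry v2 API:

    # /v2/ is the registry API root; HTTP 200 with an empty JSON body means the registry answers
    curl -i "http://$(out/minikube-linux-amd64 -p addons-465751 ip):5000/v2/"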

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.62s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.688984ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-465751
addons_test.go:332: (dbg) Run:  kubectl --context addons-465751 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.62s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-54x6r" [38ebc718-5c82-48ab-9c88-866b4144c69c] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004556813s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.30s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.74s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 7.4448ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-nlhsw" [36bbab07-ba45-458e-85a2-28fa2305a5ac] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004391243s
addons_test.go:463: (dbg) Run:  kubectl --context addons-465751 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.74s)
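
kubectl top only returns data once metrics-server's aggregated API is reported Available, so that is the first thing to check if the top pods call here ever flakes. A sketch (the APIService name is the one metrics-server conventionally registers):

    kubectl --context addons-465751 get apiservice v1beta1.metrics.k8s.io
    # AVAILABLE should read True before `kubectl top pods -n kube-system` succeeds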

                                                
                                    
x
+
TestAddons/parallel/CSI (61.2s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1026 07:51:48.018325   13321 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1026 07:51:48.029020   13321 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1026 07:51:48.029054   13321 kapi.go:107] duration metric: took 10.750949ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 10.766442ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-465751 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-465751 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) ... (the same poll repeated 21 times in total while waiting for pvc "hpvc")
addons_test.go:562: (dbg) Run:  kubectl --context addons-465751 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [84111607-ab39-42c3-9887-71fdf2a751c9] Pending
helpers_test.go:352: "task-pv-pod" [84111607-ab39-42c3-9887-71fdf2a751c9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [84111607-ab39-42c3-9887-71fdf2a751c9] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.004212567s
addons_test.go:572: (dbg) Run:  kubectl --context addons-465751 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-465751 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-465751 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-465751 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-465751 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-465751 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-465751 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) ... (the same poll repeated 7 times in total while waiting for pvc "hpvc-restore")
addons_test.go:604: (dbg) Run:  kubectl --context addons-465751 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [90f919ae-f1be-4e54-a573-fe447a24f86c] Pending
helpers_test.go:352: "task-pv-pod-restore" [90f919ae-f1be-4e54-a573-fe447a24f86c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [90f919ae-f1be-4e54-a573-fe447a24f86c] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004147816s
addons_test.go:614: (dbg) Run:  kubectl --context addons-465751 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-465751 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-465751 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-465751 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.945362217s)
--- PASS: TestAddons/parallel/CSI (61.20s)
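
The restore half of this flow hinges on the new PVC naming the VolumeSnapshot as its dataSource. A minimal sketch of what testdata/csi-hostpath-driver/pvc-restore.yaml presumably contains (object names are taken from the log; the storage class and size are assumptions):

    kubectl --context addons-465751 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc   # assumed class installed by the addon
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi                    # assumed size
    EOF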

                                                
                                    
x
+
TestAddons/parallel/Headlamp (22.2s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-465751 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-465751 --alsologtostderr -v=1: (1.005247023s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-72bw5" [5044b006-6a7e-4e37-8ccd-f2761a56e9c5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-72bw5" [5044b006-6a7e-4e37-8ccd-f2761a56e9c5] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-72bw5" [5044b006-6a7e-4e37-8ccd-f2761a56e9c5] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.007513274s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-465751 addons disable headlamp --alsologtostderr -v=1: (6.190156133s)
--- PASS: TestAddons/parallel/Headlamp (22.20s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.73s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-dnrwl" [442c0325-0c12-4b88-bc8f-938f2f8a5a74] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006074325s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.73s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (58.99s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-465751 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-465751 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-465751 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) ... (the same poll repeated 8 times in total while waiting for pvc "test-pvc")
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [021e62d3-51dc-46ee-a5db-9df080429b9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [021e62d3-51dc-46ee-a5db-9df080429b9c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [021e62d3-51dc-46ee-a5db-9df080429b9c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.003867836s
addons_test.go:967: (dbg) Run:  kubectl --context addons-465751 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 ssh "cat /opt/local-path-provisioner/pvc-331c72ac-cdbf-4634-9ec1-6085c75e794e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-465751 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-465751 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-465751 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.18399324s)
--- PASS: TestAddons/parallel/LocalPath (58.99s)
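
The provisioner materializes each bound claim under /opt/local-path-provisioner on the node, which is why the test can ssh in and cat the file directly. The layout can be inspected the same way (the pvc-prefixed directory name differs per run):

    out/minikube-linux-amd64 -p addons-465751 ssh "ls /opt/local-path-provisioner"
    # each bound PVC shows up as pvc-<uid>_<namespace>_<claim-name>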

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.94s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-qph55" [d3f4da58-871d-4071-9b3d-e686cde31287] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005327529s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.94s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-4n79v" [2c331792-f33f-43e2-b045-3964ca9fedd9] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003008336s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-465751 addons disable yakd --alsologtostderr -v=1: (5.765435215s)
--- PASS: TestAddons/parallel/Yakd (11.77s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (85.83s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-465751
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-465751: (1m25.63638879s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-465751
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-465751
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-465751
--- PASS: TestAddons/StoppedEnableDisable (85.83s)

                                                
                                    
x
+
TestCertOptions (73.3s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-170756 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-170756 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m11.904372444s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-170756 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-170756 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-170756 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-170756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-170756
--- PASS: TestCertOptions (73.30s)
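
The point of the openssl call in this test is that the extra --apiserver-ips and --apiserver-names values end up as SANs in the serving certificate. Filtering the same output makes that visible at a glance:

    out/minikube-linux-amd64 -p cert-options-170756 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    # expect 127.0.0.1, 192.168.15.15, localhost and www.google.com among the entries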

                                                
                                    
x
+
TestCertExpiration (301.67s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-385866 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-385866 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m16.188450269s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-385866 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-385866 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (43.351092217s)
helpers_test.go:175: Cleaning up "cert-expiration-385866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-385866
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-385866: (2.128200006s)
--- PASS: TestCertExpiration (301.67s)
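
The two starts first issue 3-minute certificates and then restart with --cert-expiration=8760h (one year) to force regeneration. While the profile is still up, the validity window can be read straight off the certificate (cert path as in TestCertOptions above):

    out/minikube-linux-amd64 -p cert-expiration-385866 ssh \
      "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"
    # notBefore/notAfter should now span roughly 8760h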

TestForceSystemdFlag (74.08s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-844353 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-844353 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m13.100420489s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-844353 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-844353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-844353
--- PASS: TestForceSystemdFlag (74.08s)
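
The assertion at docker_test.go:132 is that CRI-O ends up configured with the systemd cgroup manager. A minimal sketch of that verification, assuming the profile name from this run and that the drop-in uses CRI-O's standard TOML syntax for the setting (both assumptions, not the test's own code):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Profile name taken from this log; adjust for your own run.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-844353",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh failed: %v\n%s", err, out)
	}
	// CRI-O expresses the setting as: cgroup_manager = "systemd"
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("systemd cgroup manager configured")
	} else {
		fmt.Println("systemd cgroup manager NOT found in 02-crio.conf")
	}
}
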

TestForceSystemdEnv (59.65s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-552867 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-552867 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (58.784686789s)
helpers_test.go:175: Cleaning up "force-systemd-env-552867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-552867
--- PASS: TestForceSystemdEnv (59.65s)

TestErrorSpam/setup (37.45s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-076835 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-076835 --driver=kvm2  --container-runtime=crio
E1026 07:56:22.469543   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:56:22.476038   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:56:22.487511   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:56:22.509028   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:56:22.550569   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:56:22.632199   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:56:22.793853   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:56:23.115651   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:56:23.757847   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:56:25.039527   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:56:27.602421   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:56:32.723993   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:56:42.965580   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-076835 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-076835 --driver=kvm2  --container-runtime=crio: (37.452226267s)
--- PASS: TestErrorSpam/setup (37.45s)

TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.65s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 status
--- PASS: TestErrorSpam/status (0.65s)

TestErrorSpam/pause (1.48s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 pause
--- PASS: TestErrorSpam/pause (1.48s)

TestErrorSpam/unpause (1.72s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

TestErrorSpam/stop (5.07s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 stop: (1.913979657s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 stop: (1.344980101s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-076835 --log_dir /tmp/nospam-076835 stop: (1.811409226s)
--- PASS: TestErrorSpam/stop (5.07s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21772-9405/.minikube/files/etc/test/nested/copy/13321/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.45s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-118718 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1026 07:57:03.447345   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:57:44.410395   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-118718 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (52.449847261s)
--- PASS: TestFunctional/serial/StartWithProxy (52.45s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.72s)

=== RUN   TestFunctional/serial/SoftStart
I1026 07:57:51.747904   13321 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-118718 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-118718 --alsologtostderr -v=8: (33.717893067s)
functional_test.go:678: soft start took 33.718590706s for "functional-118718" cluster.
I1026 07:58:25.466197   13321 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (33.72s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-118718 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-118718 cache add registry.k8s.io/pause:3.1: (1.120764736s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-118718 cache add registry.k8s.io/pause:3.3: (1.201880077s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-118718 cache add registry.k8s.io/pause:latest: (1.157864388s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

TestFunctional/serial/CacheCmd/cache/add_local (2.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-118718 /tmp/TestFunctionalserialCacheCmdcacheadd_local3885934784/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 cache add minikube-local-cache-test:functional-118718
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-118718 cache add minikube-local-cache-test:functional-118718: (1.76208235s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 cache delete minikube-local-cache-test:functional-118718
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-118718
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.09s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118718 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (176.630632ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
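
The cycle above: remove the image from the node's runtime, confirm `crictl inspecti` now fails, run `cache reload`, then confirm the inspect succeeds again. A hedged sketch of the same cycle driven from Go, with binary path, profile, and image copied from this log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes the minikube binary with the given args and reports success.
func run(args ...string) bool {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("exit != 0 for %v: %s\n", args, out)
		return false
	}
	return true
}

func main() {
	p := "functional-118718" // profile from this run
	img := "registry.k8s.io/pause:latest"

	if !run("-p", p, "ssh", "sudo crictl rmi "+img) {
		log.Fatal("rmi failed")
	}
	// Expected to fail: the image was just removed from the node.
	if run("-p", p, "ssh", "sudo crictl inspecti "+img) {
		log.Fatal("image unexpectedly still present")
	}
	if !run("-p", p, "cache", "reload") {
		log.Fatal("cache reload failed")
	}
	// Expected to succeed: reload pushes the cached image back onto the node.
	if !run("-p", p, "ssh", "sudo crictl inspecti "+img) {
		log.Fatal("image missing after cache reload")
	}
	fmt.Println("cache reload cycle OK")
}
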

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 kubectl -- --context functional-118718 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-118718 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (289.13s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-118718 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1026 07:59:06.334819   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:01:22.469342   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:01:50.183234   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-118718 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m49.130909786s)
functional_test.go:776: restart took 4m49.131061161s for "functional-118718" cluster.
I1026 08:03:22.517140   13321 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (289.13s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-118718 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-118718 logs: (1.487450519s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

TestFunctional/serial/LogsFileCmd (1.47s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 logs --file /tmp/TestFunctionalserialLogsFileCmd459199543/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-118718 logs --file /tmp/TestFunctionalserialLogsFileCmd459199543/001/logs.txt: (1.469978697s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-118718 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-118718
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-118718: exit status 115 (233.647239ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.158:31771 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-118718 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)

TestFunctional/parallel/ConfigCmd (0.39s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118718 config get cpus: exit status 14 (67.137513ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118718 config get cpus: exit status 14 (57.481739ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
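
`config get` on an unset key exits with status 14 instead of printing an empty value, so callers can distinguish "unset" from "set to empty". A sketch of reading that exit code from Go (the meaning of 14 is inferred from this log, not from documented API):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-118718", "config", "get", "cpus")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cpus = %s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
		// Matches the "specified key could not be found in config" case above.
		fmt.Println("cpus is not set")
	default:
		log.Fatal(err)
	}
}
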

TestFunctional/parallel/DashboardCmd (19.48s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-118718 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-118718 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 20453: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.48s)

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-118718 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-118718 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (127.984846ms)

-- stdout --
	* [functional-118718] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-9405/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9405/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1026 08:03:40.374985   20114 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:03:40.375293   20114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:03:40.375305   20114 out.go:374] Setting ErrFile to fd 2...
	I1026 08:03:40.375315   20114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:03:40.375597   20114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
	I1026 08:03:40.376216   20114 out.go:368] Setting JSON to false
	I1026 08:03:40.377425   20114 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2764,"bootTime":1761463056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:03:40.377545   20114 start.go:141] virtualization: kvm guest
	I1026 08:03:40.379613   20114 out.go:179] * [functional-118718] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:03:40.381037   20114 notify.go:220] Checking for updates...
	I1026 08:03:40.381074   20114 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:03:40.382222   20114 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:03:40.383759   20114 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9405/kubeconfig
	I1026 08:03:40.385000   20114 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9405/.minikube
	I1026 08:03:40.386226   20114 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:03:40.387478   20114 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:03:40.389403   20114 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:03:40.390061   20114 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:03:40.429179   20114 out.go:179] * Using the kvm2 driver based on existing profile
	I1026 08:03:40.430379   20114 start.go:305] selected driver: kvm2
	I1026 08:03:40.430397   20114 start.go:925] validating driver "kvm2" against &{Name:functional-118718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-118718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:03:40.430541   20114 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:03:40.432443   20114 out.go:203] 
	W1026 08:03:40.433522   20114 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1026 08:03:40.434599   20114 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-118718 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.24s)
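
The dry run fails before touching the VM because 250MB is below the usable minimum of 1800MB reported in the error. A toy sketch of that guard, with the threshold copied from the message above (the real validation in minikube also handles unit parsing and driver overhead, which this sketch does not):

package main

import "fmt"

// validateMemory mirrors the RSRC_INSUFFICIENT_REQ_MEMORY guard seen above:
// requests below the usable minimum are rejected before any VM work starts.
func validateMemory(requestedMiB int) error {
	const minUsableMiB = 1800 // from the error message in this log
	if requestedMiB < minUsableMiB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMiB, minUsableMiB)
	}
	return nil
}

func main() {
	for _, req := range []int{250, 4096} {
		if err := validateMemory(req); err != nil {
			fmt.Println("X", err)
			continue
		}
		fmt.Printf("%dMiB OK\n", req)
	}
}
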

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-118718 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-118718 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (131.685315ms)

-- stdout --
	* [functional-118718] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-9405/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9405/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1026 08:03:40.627066   20170 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:03:40.627421   20170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:03:40.627436   20170 out.go:374] Setting ErrFile to fd 2...
	I1026 08:03:40.627442   20170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:03:40.627903   20170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
	I1026 08:03:40.628549   20170 out.go:368] Setting JSON to false
	I1026 08:03:40.629727   20170 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2765,"bootTime":1761463056,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:03:40.629864   20170 start.go:141] virtualization: kvm guest
	I1026 08:03:40.631707   20170 out.go:179] * [functional-118718] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1026 08:03:40.632915   20170 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:03:40.632915   20170 notify.go:220] Checking for updates...
	I1026 08:03:40.635192   20170 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:03:40.636425   20170 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9405/kubeconfig
	I1026 08:03:40.637541   20170 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9405/.minikube
	I1026 08:03:40.638653   20170 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:03:40.640308   20170 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:03:40.641809   20170 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:03:40.642242   20170 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:03:40.677341   20170 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1026 08:03:40.678423   20170 start.go:305] selected driver: kvm2
	I1026 08:03:40.678438   20170 start.go:925] validating driver "kvm2" against &{Name:functional-118718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-118718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:03:40.678540   20170 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:03:40.680557   20170 out.go:203] 
	W1026 08:03:40.681609   20170 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1026 08:03:40.682858   20170 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.63s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.63s)

TestFunctional/parallel/ServiceCmdConnect (9.42s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-118718 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-118718 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-dcf2c" [772c2bb8-8544-4592-b003-b88f18d52095] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-dcf2c" [772c2bb8-8544-4592-b003-b88f18d52095] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.005084523s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.158:30116
functional_test.go:1680: http://192.168.39.158:30116: success! body:
Request served by hello-node-connect-7d85dfc575-dcf2c

HTTP/1.1 GET /

Host: 192.168.39.158:30116
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.42s)
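
Once `service hello-node-connect --url` prints the NodePort endpoint, the verification is a plain HTTP GET whose echoed body names the serving pod, as captured above. A small client-side sketch, assuming the URL from this run were still live:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Endpoint printed by `minikube service hello-node-connect --url` in this run.
	url := "http://192.168.39.158:30116"
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// echo-server reflects the request; the first line names the serving pod.
	fmt.Printf("status %d\n%s", resp.StatusCode, body)
}
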

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (45.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [098b048b-795d-4f06-ab4d-7f4b074e7ec3] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004100425s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-118718 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-118718 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-118718 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-118718 apply -f testdata/storage-provisioner/pod.yaml
I1026 08:03:36.260143   13321 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e398ecce-d0f4-44bc-aa20-314f4dd3c7a1] Pending
helpers_test.go:352: "sp-pod" [e398ecce-d0f4-44bc-aa20-314f4dd3c7a1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e398ecce-d0f4-44bc-aa20-314f4dd3c7a1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.0051035s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-118718 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-118718 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-118718 delete -f testdata/storage-provisioner/pod.yaml: (1.034110754s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-118718 apply -f testdata/storage-provisioner/pod.yaml
I1026 08:03:52.612348   13321 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [5d04cdb5-80ee-4598-aba2-60b234cbb30e] Pending
helpers_test.go:352: "sp-pod" [5d04cdb5-80ee-4598-aba2-60b234cbb30e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [5d04cdb5-80ee-4598-aba2-60b234cbb30e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004587989s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-118718 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.91s)
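
The persistence proof runs in a fixed order: write a marker file through the first pod, delete the pod but not the claim, start a second pod against the same claim, and list the file. A sketch of the same steps as kubectl shell-outs (context and manifest paths copied from this log; the test's own readiness polling is simplified here to `kubectl wait`):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func kubectl(args ...string) string {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-118718"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// 1. Write a marker file onto the PVC-backed mount via the first pod.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// 2. Delete the pod; the claim (and its data) must survive.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	// 3. Recreate the pod against the same claim and wait for it.
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m")
	// 4. The file written before the delete must still be there.
	fmt.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}
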

TestFunctional/parallel/SSHCmd (0.34s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

TestFunctional/parallel/CpCmd (1.09s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh -n functional-118718 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 cp functional-118718:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd885832171/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh -n functional-118718 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh -n functional-118718 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.09s)

TestFunctional/parallel/MySQL (31.27s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-118718 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-clpwm" [7dcb369c-5add-49ab-978d-dde6003c265c] Pending
helpers_test.go:352: "mysql-5bb876957f-clpwm" [7dcb369c-5add-49ab-978d-dde6003c265c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-clpwm" [7dcb369c-5add-49ab-978d-dde6003c265c] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.006443409s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-118718 exec mysql-5bb876957f-clpwm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-118718 exec mysql-5bb876957f-clpwm -- mysql -ppassword -e "show databases;": exit status 1 (122.684708ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1026 08:04:14.455938   13321 retry.go:31] will retry after 896.93856ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-118718 exec mysql-5bb876957f-clpwm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-118718 exec mysql-5bb876957f-clpwm -- mysql -ppassword -e "show databases;": exit status 1 (120.586946ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1026 08:04:15.474836   13321 retry.go:31] will retry after 772.558795ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-118718 exec mysql-5bb876957f-clpwm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.27s)
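The ERROR 2002 retries above are expected: the pod reports Running before mysqld has finished creating its socket, so the test simply polls until the client connects. A minimal sketch of the same readiness loop done by hand, assuming the functional-118718 context and the mysql deployment from testdata/mysql.yaml:

  # Poll until mysqld accepts connections; ERROR 2002 only means the socket is not up yet.
  until kubectl --context functional-118718 exec deploy/mysql -- \
      mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
    sleep 1
  done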

                                                
                                    
TestFunctional/parallel/FileSync (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/13321/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "sudo cat /etc/test/nested/copy/13321/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)
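The guest path mirrors the layout staged under the host's ~/.minikube/files directory, with the 13321 component being the test runner's PID. A hedged sketch of reproducing the sync manually (the demo path is illustrative; the copy happens on start/restart):

  mkdir -p ~/.minikube/files/etc/test/nested/copy/demo
  echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/demo/hosts
  minikube start -p functional-118718    # files/ contents are copied into the VM at the same paths
  minikube -p functional-118718 ssh "cat /etc/test/nested/copy/demo/hosts"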

                                                
                                    
TestFunctional/parallel/CertSync (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/13321.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "sudo cat /etc/ssl/certs/13321.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/13321.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "sudo cat /usr/share/ca-certificates/13321.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/133212.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "sudo cat /etc/ssl/certs/133212.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/133212.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "sudo cat /usr/share/ca-certificates/133212.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.35s)
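The 51391683.0 and 3ec20f2e.0 names are OpenSSL subject-hash filenames, the convention trust stores use to look up certificates. A sketch of checking that mapping, assuming the test cert was staged under the host's ~/.minikube/certs directory as minikube's cert sync expects:

  # Prints the 8-hex-digit subject hash; for the first cert above it should match 51391683.
  openssl x509 -noout -subject_hash -in ~/.minikube/certs/13321.pem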

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-118718 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
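The go-template above prints just the label keys of the first node; an equivalent jsonpath spelling, for comparison:

  kubectl --context functional-118718 get nodes -o jsonpath='{.items[0].metadata.labels}'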

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118718 ssh "sudo systemctl is-active docker": exit status 1 (242.022453ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118718 ssh "sudo systemctl is-active containerd": exit status 1 (261.547639ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
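The ssh exit status 3 here is systemctl's own convention, not a test failure: is-active prints the unit state and exits 0 only when the unit is active (3 for inactive), so it can be used directly as a conditional. For example:

  if minikube -p functional-118718 ssh "sudo systemctl is-active --quiet docker"; then
    echo "docker is the active runtime"
  else
    echo "docker is disabled (expected with --container-runtime=crio)"
  fi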

                                                
                                    
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-118718 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-118718 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-nrk7h" [6c0c0c95-a392-4398-a395-1d314834e9a2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-nrk7h" [6c0c0c95-a392-4398-a395-1d314834e9a2] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.012755186s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)
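The deployment/service pair above is the standard two-step NodePort exposure; reproduced by hand with the same image and names:

  kubectl --context functional-118718 create deployment hello-node --image=kicbase/echo-server
  kubectl --context functional-118718 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-118718 get svc hello-node    # shows the assigned nodePort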

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "229.673805ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "58.859401ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "232.684797ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.23208ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)
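Both listings emit machine-readable JSON; a hedged sketch of extracting profile names with jq (jq availability, and the valid/invalid top-level keys of minikube's schema, are assumptions here):

  minikube profile list -o json | jq -r '.valid[].Name'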

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-118718 /tmp/TestFunctionalparallelMountCmdany-port1880774632/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761465812516809912" to /tmp/TestFunctionalparallelMountCmdany-port1880774632/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761465812516809912" to /tmp/TestFunctionalparallelMountCmdany-port1880774632/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761465812516809912" to /tmp/TestFunctionalparallelMountCmdany-port1880774632/001/test-1761465812516809912
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118718 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (145.074017ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1026 08:03:32.662158   13321 retry.go:31] will retry after 274.967314ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 26 08:03 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 26 08:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 26 08:03 test-1761465812516809912
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh cat /mount-9p/test-1761465812516809912
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-118718 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c76349fe-ff8f-49ef-af05-98224ebeec0c] Pending
helpers_test.go:352: "busybox-mount" [c76349fe-ff8f-49ef-af05-98224ebeec0c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c76349fe-ff8f-49ef-af05-98224ebeec0c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c76349fe-ff8f-49ef-af05-98224ebeec0c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.007510535s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-118718 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-118718 /tmp/TestFunctionalparallelMountCmdany-port1880774632/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.68s)
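The mount tests drive minikube's 9p server: the host directory is exported over 9p and mounted in the guest at /mount-9p, which is why findmnt is the readiness probe (the first probe can race the mount coming up, hence the retry). A manual sketch with an illustrative host path:

  # Terminal 1: the mount helper stays in the foreground while the mount is live
  minikube mount -p functional-118718 /tmp/shared:/mount-9p
  # Terminal 2: confirm the guest sees a 9p filesystem at the mountpoint
  minikube -p functional-118718 ssh "findmnt -T /mount-9p | grep 9p"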

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-118718 /tmp/TestFunctionalparallelMountCmdspecific-port3830986858/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118718 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (215.112952ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1026 08:03:40.411766   13321 retry.go:31] will retry after 740.12682ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-118718 /tmp/TestFunctionalparallelMountCmdspecific-port3830986858/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118718 ssh "sudo umount -f /mount-9p": exit status 1 (195.837903ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-118718 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-118718 /tmp/TestFunctionalparallelMountCmdspecific-port3830986858/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.78s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 service list -o json
functional_test.go:1504: Took "350.512093ms" to run "out/minikube-linux-amd64 -p functional-118718 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.158:31417
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.158:31417
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
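The --https and plain --url forms resolve to the same NodePort endpoint (31417 on the VM IP here); scripted usage typically captures the URL and curls it:

  URL=$(minikube -p functional-118718 service hello-node --url)
  curl -s "$URL"    # kicbase/echo-server echoes the request back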

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-118718 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-118718
localhost/kicbase/echo-server:functional-118718
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-118718 image ls --format short --alsologtostderr:
I1026 08:03:52.197310   20792 out.go:360] Setting OutFile to fd 1 ...
I1026 08:03:52.197422   20792 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:03:52.197430   20792 out.go:374] Setting ErrFile to fd 2...
I1026 08:03:52.197434   20792 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:03:52.197638   20792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
I1026 08:03:52.198220   20792 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:03:52.198323   20792 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:03:52.200263   20792 ssh_runner.go:195] Run: systemctl --version
I1026 08:03:52.202428   20792 main.go:141] libmachine: domain functional-118718 has defined MAC address 52:54:00:b1:5a:1f in network mk-functional-118718
I1026 08:03:52.202841   20792 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b1:5a:1f", ip: ""} in network mk-functional-118718: {Iface:virbr1 ExpiryTime:2025-10-26 08:57:14 +0000 UTC Type:0 Mac:52:54:00:b1:5a:1f Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-118718 Clientid:01:52:54:00:b1:5a:1f}
I1026 08:03:52.202867   20792 main.go:141] libmachine: domain functional-118718 has defined IP address 192.168.39.158 and MAC address 52:54:00:b1:5a:1f in network mk-functional-118718
I1026 08:03:52.203101   20792 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/functional-118718/id_rsa Username:docker}
I1026 08:03:52.288322   20792 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image ls --format table --alsologtostderr
2025/10/26 08:04:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-118718 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/my-image                      │ functional-118718  │ 7a445eec75d6f │ 1.47MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ localhost/minikube-local-cache-test     │ functional-118718  │ d78460728098d │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-118718  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 657fdcd1c3659 │ 155MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-118718 image ls --format table --alsologtostderr:
I1026 08:04:00.968943   20914 out.go:360] Setting OutFile to fd 1 ...
I1026 08:04:00.969282   20914 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:04:00.969297   20914 out.go:374] Setting ErrFile to fd 2...
I1026 08:04:00.969304   20914 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:04:00.969611   20914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
I1026 08:04:00.970484   20914 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:04:00.970644   20914 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:04:00.973382   20914 ssh_runner.go:195] Run: systemctl --version
I1026 08:04:00.976147   20914 main.go:141] libmachine: domain functional-118718 has defined MAC address 52:54:00:b1:5a:1f in network mk-functional-118718
I1026 08:04:00.976611   20914 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b1:5a:1f", ip: ""} in network mk-functional-118718: {Iface:virbr1 ExpiryTime:2025-10-26 08:57:14 +0000 UTC Type:0 Mac:52:54:00:b1:5a:1f Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-118718 Clientid:01:52:54:00:b1:5a:1f}
I1026 08:04:00.976648   20914 main.go:141] libmachine: domain functional-118718 has defined IP address 192.168.39.158 and MAC address 52:54:00:b1:5a:1f in network mk-functional-118718
I1026 08:04:00.976835   20914 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/functional-118718/id_rsa Username:docker}
I1026 08:04:01.079569   20914 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image ls --format json --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-118718 image ls --format json --alsologtostderr: (1.386770249s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-118718 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8"],"repoTags":["docker.io/library/nginx:latest"],"size":"155467611"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155ba
a902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}
,{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"e95222f40ddc3b0a3d75c345d1d33195713beb5f188046781f24a7898538d6ce","repoDigests":["docker.io/library/ad0cb1e57a0ad0f9167d7647b796fa012641dd58b3f0e7cf60230d8b7bbd5bff-tmp@sha256:459
8cf68683c52721764c0bd4c3e9dee872174f96d9fdea9624704865bf8137c"],"repoTags":[],"size":"1466018"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e21
78d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo
-server:functional-118718"],"size":"4944818"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"d78460728098da2b2f9489f35a8216abca8c25bd06bd5edd4ecba48aafad990c","repoDigests":["localhost/minikube-local-cache-test@sha256:12f5c693f51a2870e2bc99ad2514a22f87d944f94d8bbee37fecdbdfad64f909"],"repoTags":["localhost/minikube-local-cache-test:functional-118718"],"size":"3330"},{"id":"7a445eec75d6f05a3ae132bd127d45e8483c275b8634700425162e37aafbee9c","repoDigests":["localhost/my-image@sha256:30941483fce41b3205ce31dd9497a8ba4d991592a1acff35a5705ffee7a029c3"],"repoTags":["localhost/my-image:functional-118718"],"size":"1468600"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/e
tcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf
0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-118718 image ls --format json --alsologtostderr:
I1026 08:03:59.579263   20888 out.go:360] Setting OutFile to fd 1 ...
I1026 08:03:59.579374   20888 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:03:59.579381   20888 out.go:374] Setting ErrFile to fd 2...
I1026 08:03:59.579388   20888 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:03:59.579616   20888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
I1026 08:03:59.580240   20888 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:03:59.580358   20888 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:03:59.582393   20888 ssh_runner.go:195] Run: systemctl --version
I1026 08:03:59.584745   20888 main.go:141] libmachine: domain functional-118718 has defined MAC address 52:54:00:b1:5a:1f in network mk-functional-118718
I1026 08:03:59.585253   20888 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b1:5a:1f", ip: ""} in network mk-functional-118718: {Iface:virbr1 ExpiryTime:2025-10-26 08:57:14 +0000 UTC Type:0 Mac:52:54:00:b1:5a:1f Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-118718 Clientid:01:52:54:00:b1:5a:1f}
I1026 08:03:59.585281   20888 main.go:141] libmachine: domain functional-118718 has defined IP address 192.168.39.158 and MAC address 52:54:00:b1:5a:1f in network mk-functional-118718
I1026 08:03:59.585425   20888 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/functional-118718/id_rsa Username:docker}
I1026 08:03:59.691117   20888 ssh_runner.go:195] Run: sudo crictl images --output json
I1026 08:04:00.850539   20888 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.159382958s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (1.39s)
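The JSON listing is a single array of image objects with the id/repoDigests/repoTags/size keys visible above; a hedged jq one-liner for pulling out just the tags (jq assumed installed; the trailing ? skips untagged images whose repoTags array is empty):

  minikube -p functional-118718 image ls --format json | jq -r '.[].repoTags[]?'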

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-118718 image ls --format yaml --alsologtostderr:
- id: 657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8
repoTags:
- docker.io/library/nginx:latest
size: "155467611"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-118718
size: "4944818"
- id: d78460728098da2b2f9489f35a8216abca8c25bd06bd5edd4ecba48aafad990c
repoDigests:
- localhost/minikube-local-cache-test@sha256:12f5c693f51a2870e2bc99ad2514a22f87d944f94d8bbee37fecdbdfad64f909
repoTags:
- localhost/minikube-local-cache-test:functional-118718
size: "3330"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-118718 image ls --format yaml --alsologtostderr:
I1026 08:03:52.442463   20802 out.go:360] Setting OutFile to fd 1 ...
I1026 08:03:52.442816   20802 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:03:52.442830   20802 out.go:374] Setting ErrFile to fd 2...
I1026 08:03:52.442844   20802 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:03:52.443203   20802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
I1026 08:03:52.444007   20802 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:03:52.444183   20802 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:03:52.446841   20802 ssh_runner.go:195] Run: systemctl --version
I1026 08:03:52.449545   20802 main.go:141] libmachine: domain functional-118718 has defined MAC address 52:54:00:b1:5a:1f in network mk-functional-118718
I1026 08:03:52.450033   20802 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b1:5a:1f", ip: ""} in network mk-functional-118718: {Iface:virbr1 ExpiryTime:2025-10-26 08:57:14 +0000 UTC Type:0 Mac:52:54:00:b1:5a:1f Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-118718 Clientid:01:52:54:00:b1:5a:1f}
I1026 08:03:52.450070   20802 main.go:141] libmachine: domain functional-118718 has defined IP address 192.168.39.158 and MAC address 52:54:00:b1:5a:1f in network mk-functional-118718
I1026 08:03:52.450313   20802 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/functional-118718/id_rsa Username:docker}
I1026 08:03:52.559731   20802 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118718 ssh pgrep buildkitd: exit status 1 (173.191456ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image build -t localhost/my-image:functional-118718 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-118718 image build -t localhost/my-image:functional-118718 testdata/build --alsologtostderr: (6.207898209s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-118718 image build -t localhost/my-image:functional-118718 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e95222f40dd
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-118718
--> 7a445eec75d
Successfully tagged localhost/my-image:functional-118718
7a445eec75d6f05a3ae132bd127d45e8483c275b8634700425162e37aafbee9c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-118718 image build -t localhost/my-image:functional-118718 testdata/build --alsologtostderr:
I1026 08:03:52.896149   20834 out.go:360] Setting OutFile to fd 1 ...
I1026 08:03:52.896443   20834 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:03:52.896454   20834 out.go:374] Setting ErrFile to fd 2...
I1026 08:03:52.896458   20834 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 08:03:52.896711   20834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
I1026 08:03:52.897317   20834 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:03:52.898123   20834 config.go:182] Loaded profile config "functional-118718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 08:03:52.900562   20834 ssh_runner.go:195] Run: systemctl --version
I1026 08:03:52.903284   20834 main.go:141] libmachine: domain functional-118718 has defined MAC address 52:54:00:b1:5a:1f in network mk-functional-118718
I1026 08:03:52.903718   20834 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b1:5a:1f", ip: ""} in network mk-functional-118718: {Iface:virbr1 ExpiryTime:2025-10-26 08:57:14 +0000 UTC Type:0 Mac:52:54:00:b1:5a:1f Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-118718 Clientid:01:52:54:00:b1:5a:1f}
I1026 08:03:52.903759   20834 main.go:141] libmachine: domain functional-118718 has defined IP address 192.168.39.158 and MAC address 52:54:00:b1:5a:1f in network mk-functional-118718
I1026 08:03:52.903914   20834 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/functional-118718/id_rsa Username:docker}
I1026 08:03:52.993744   20834 build_images.go:161] Building image from path: /tmp/build.1236152733.tar
I1026 08:03:52.993861   20834 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1026 08:03:53.011017   20834 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1236152733.tar
I1026 08:03:53.016892   20834 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1236152733.tar: stat -c "%s %y" /var/lib/minikube/build/build.1236152733.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1236152733.tar': No such file or directory
I1026 08:03:53.016934   20834 ssh_runner.go:362] scp /tmp/build.1236152733.tar --> /var/lib/minikube/build/build.1236152733.tar (3072 bytes)
I1026 08:03:53.059079   20834 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1236152733
I1026 08:03:53.074297   20834 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1236152733 -xf /var/lib/minikube/build/build.1236152733.tar
I1026 08:03:53.091358   20834 crio.go:315] Building image: /var/lib/minikube/build/build.1236152733
I1026 08:03:53.091461   20834 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-118718 /var/lib/minikube/build/build.1236152733 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1026 08:03:59.000869   20834 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-118718 /var/lib/minikube/build/build.1236152733 --cgroup-manager=cgroupfs: (5.909365031s)
I1026 08:03:59.000943   20834 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1236152733
I1026 08:03:59.019179   20834 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1236152733.tar
I1026 08:03:59.036402   20834 build_images.go:217] Built localhost/my-image:functional-118718 from /tmp/build.1236152733.tar
I1026 08:03:59.036448   20834 build_images.go:133] succeeded building to: functional-118718
I1026 08:03:59.036455   20834 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.85s)
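minikube image build ships the build context into the VM and runs the runtime's builder there (sudo podman build under crio, as the stderr shows). A minimal reproduction of the three-step testdata/build context, with illustrative paths and tag:

  mkdir -p /tmp/ctx && cd /tmp/ctx
  printf 'hello\n' > content.txt
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  minikube -p functional-118718 image build -t localhost/my-image:demo .
  minikube -p functional-118718 image ls | grep my-image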

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.752830199s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-118718
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-118718 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2068714580/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-118718 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2068714580/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-118718 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2068714580/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118718 ssh "findmnt -T" /mount1: exit status 1 (230.330219ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1026 08:03:42.202797   13321 retry.go:31] will retry after 703.108995ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-118718 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-118718 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2068714580/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-118718 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2068714580/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-118718 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2068714580/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image load --daemon kicbase/echo-server:functional-118718 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-118718 image load --daemon kicbase/echo-server:functional-118718 --alsologtostderr: (2.473248904s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.77s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image load --daemon kicbase/echo-server:functional-118718 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-118718
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image load --daemon kicbase/echo-server:functional-118718 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image save kicbase/echo-server:functional-118718 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image rm kicbase/echo-server:functional-118718 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-118718
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-118718 image save --daemon kicbase/echo-server:functional-118718 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-118718
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.67s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-118718
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-118718
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-118718
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (203.53s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1026 08:06:22.459991   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-594398 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m22.985513114s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (203.53s)

TestMultiControlPlane/serial/DeployApp (7.61s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-594398 kubectl -- rollout status deployment/busybox: (5.32722226s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-plfp5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-r9dzg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-xk29b -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-plfp5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-r9dzg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-xk29b -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-plfp5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-r9dzg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-xk29b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.61s)

TestMultiControlPlane/serial/PingHostFromPods (1.27s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-plfp5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-plfp5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-r9dzg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-r9dzg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-xk29b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 kubectl -- exec busybox-7b57f96db7-xk29b -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)

TestMultiControlPlane/serial/AddWorkerNode (43.81s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 node add --alsologtostderr -v 5
E1026 08:08:29.910149   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:08:29.916530   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:08:29.927957   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:08:29.949406   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:08:29.990833   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:08:30.072371   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:08:30.233936   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:08:30.555417   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:08:31.197150   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:08:32.478827   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-594398 node add --alsologtostderr -v 5: (43.125023017s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (43.81s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-594398 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

TestMultiControlPlane/serial/CopyFile (10.54s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 status --output json --alsologtostderr -v 5
E1026 08:08:35.040753   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp testdata/cp-test.txt ha-594398:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3766108530/001/cp-test_ha-594398.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398:/home/docker/cp-test.txt ha-594398-m02:/home/docker/cp-test_ha-594398_ha-594398-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m02 "sudo cat /home/docker/cp-test_ha-594398_ha-594398-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398:/home/docker/cp-test.txt ha-594398-m03:/home/docker/cp-test_ha-594398_ha-594398-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m03 "sudo cat /home/docker/cp-test_ha-594398_ha-594398-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398:/home/docker/cp-test.txt ha-594398-m04:/home/docker/cp-test_ha-594398_ha-594398-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m04 "sudo cat /home/docker/cp-test_ha-594398_ha-594398-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp testdata/cp-test.txt ha-594398-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3766108530/001/cp-test_ha-594398-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398-m02:/home/docker/cp-test.txt ha-594398:/home/docker/cp-test_ha-594398-m02_ha-594398.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398 "sudo cat /home/docker/cp-test_ha-594398-m02_ha-594398.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398-m02:/home/docker/cp-test.txt ha-594398-m03:/home/docker/cp-test_ha-594398-m02_ha-594398-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m03 "sudo cat /home/docker/cp-test_ha-594398-m02_ha-594398-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398-m02:/home/docker/cp-test.txt ha-594398-m04:/home/docker/cp-test_ha-594398-m02_ha-594398-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m04 "sudo cat /home/docker/cp-test_ha-594398-m02_ha-594398-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp testdata/cp-test.txt ha-594398-m03:/home/docker/cp-test.txt
E1026 08:08:40.163192   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3766108530/001/cp-test_ha-594398-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398-m03:/home/docker/cp-test.txt ha-594398:/home/docker/cp-test_ha-594398-m03_ha-594398.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398 "sudo cat /home/docker/cp-test_ha-594398-m03_ha-594398.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398-m03:/home/docker/cp-test.txt ha-594398-m02:/home/docker/cp-test_ha-594398-m03_ha-594398-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m02 "sudo cat /home/docker/cp-test_ha-594398-m03_ha-594398-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398-m03:/home/docker/cp-test.txt ha-594398-m04:/home/docker/cp-test_ha-594398-m03_ha-594398-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m04 "sudo cat /home/docker/cp-test_ha-594398-m03_ha-594398-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp testdata/cp-test.txt ha-594398-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3766108530/001/cp-test_ha-594398-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398-m04:/home/docker/cp-test.txt ha-594398:/home/docker/cp-test_ha-594398-m04_ha-594398.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398 "sudo cat /home/docker/cp-test_ha-594398-m04_ha-594398.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398-m04:/home/docker/cp-test.txt ha-594398-m02:/home/docker/cp-test_ha-594398-m04_ha-594398-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m02 "sudo cat /home/docker/cp-test_ha-594398-m04_ha-594398-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 cp ha-594398-m04:/home/docker/cp-test.txt ha-594398-m03:/home/docker/cp-test_ha-594398-m04_ha-594398-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 ssh -n ha-594398-m03 "sudo cat /home/docker/cp-test_ha-594398-m04_ha-594398-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.54s)

TestMultiControlPlane/serial/StopSecondaryNode (83.66s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 node stop m02 --alsologtostderr -v 5
E1026 08:08:50.405124   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:09:10.886746   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:09:51.848376   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-594398 node stop m02 --alsologtostderr -v 5: (1m23.154278514s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-594398 status --alsologtostderr -v 5: exit status 7 (507.07839ms)

-- stdout --
	ha-594398
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-594398-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-594398-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-594398-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1026 08:10:08.154439   24021 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:10:08.154695   24021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:10:08.154703   24021 out.go:374] Setting ErrFile to fd 2...
	I1026 08:10:08.154707   24021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:10:08.154891   24021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
	I1026 08:10:08.155048   24021 out.go:368] Setting JSON to false
	I1026 08:10:08.155078   24021 mustload.go:65] Loading cluster: ha-594398
	I1026 08:10:08.155193   24021 notify.go:220] Checking for updates...
	I1026 08:10:08.155471   24021 config.go:182] Loaded profile config "ha-594398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:10:08.155486   24021 status.go:174] checking status of ha-594398 ...
	I1026 08:10:08.157526   24021 status.go:371] ha-594398 host status = "Running" (err=<nil>)
	I1026 08:10:08.157542   24021 host.go:66] Checking if "ha-594398" exists ...
	I1026 08:10:08.159930   24021 main.go:141] libmachine: domain ha-594398 has defined MAC address 52:54:00:76:d8:bc in network mk-ha-594398
	I1026 08:10:08.160341   24021 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:d8:bc", ip: ""} in network mk-ha-594398: {Iface:virbr1 ExpiryTime:2025-10-26 09:04:32 +0000 UTC Type:0 Mac:52:54:00:76:d8:bc Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-594398 Clientid:01:52:54:00:76:d8:bc}
	I1026 08:10:08.160370   24021 main.go:141] libmachine: domain ha-594398 has defined IP address 192.168.39.126 and MAC address 52:54:00:76:d8:bc in network mk-ha-594398
	I1026 08:10:08.160571   24021 host.go:66] Checking if "ha-594398" exists ...
	I1026 08:10:08.160803   24021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:10:08.163172   24021 main.go:141] libmachine: domain ha-594398 has defined MAC address 52:54:00:76:d8:bc in network mk-ha-594398
	I1026 08:10:08.163638   24021 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:d8:bc", ip: ""} in network mk-ha-594398: {Iface:virbr1 ExpiryTime:2025-10-26 09:04:32 +0000 UTC Type:0 Mac:52:54:00:76:d8:bc Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-594398 Clientid:01:52:54:00:76:d8:bc}
	I1026 08:10:08.163661   24021 main.go:141] libmachine: domain ha-594398 has defined IP address 192.168.39.126 and MAC address 52:54:00:76:d8:bc in network mk-ha-594398
	I1026 08:10:08.163818   24021 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/ha-594398/id_rsa Username:docker}
	I1026 08:10:08.252962   24021 ssh_runner.go:195] Run: systemctl --version
	I1026 08:10:08.260206   24021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:10:08.277253   24021 kubeconfig.go:125] found "ha-594398" server: "https://192.168.39.254:8443"
	I1026 08:10:08.277282   24021 api_server.go:166] Checking apiserver status ...
	I1026 08:10:08.277314   24021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:10:08.298814   24021 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1362/cgroup
	W1026 08:10:08.311623   24021 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1362/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:10:08.311690   24021 ssh_runner.go:195] Run: ls
	I1026 08:10:08.316780   24021 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1026 08:10:08.321770   24021 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1026 08:10:08.321793   24021 status.go:463] ha-594398 apiserver status = Running (err=<nil>)
	I1026 08:10:08.321803   24021 status.go:176] ha-594398 status: &{Name:ha-594398 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:10:08.321822   24021 status.go:174] checking status of ha-594398-m02 ...
	I1026 08:10:08.323687   24021 status.go:371] ha-594398-m02 host status = "Stopped" (err=<nil>)
	I1026 08:10:08.323708   24021 status.go:384] host is not running, skipping remaining checks
	I1026 08:10:08.323714   24021 status.go:176] ha-594398-m02 status: &{Name:ha-594398-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:10:08.323734   24021 status.go:174] checking status of ha-594398-m03 ...
	I1026 08:10:08.325253   24021 status.go:371] ha-594398-m03 host status = "Running" (err=<nil>)
	I1026 08:10:08.325273   24021 host.go:66] Checking if "ha-594398-m03" exists ...
	I1026 08:10:08.327714   24021 main.go:141] libmachine: domain ha-594398-m03 has defined MAC address 52:54:00:44:e2:fe in network mk-ha-594398
	I1026 08:10:08.328170   24021 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:44:e2:fe", ip: ""} in network mk-ha-594398: {Iface:virbr1 ExpiryTime:2025-10-26 09:06:31 +0000 UTC Type:0 Mac:52:54:00:44:e2:fe Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-594398-m03 Clientid:01:52:54:00:44:e2:fe}
	I1026 08:10:08.328193   24021 main.go:141] libmachine: domain ha-594398-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:44:e2:fe in network mk-ha-594398
	I1026 08:10:08.328333   24021 host.go:66] Checking if "ha-594398-m03" exists ...
	I1026 08:10:08.328528   24021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:10:08.330391   24021 main.go:141] libmachine: domain ha-594398-m03 has defined MAC address 52:54:00:44:e2:fe in network mk-ha-594398
	I1026 08:10:08.330725   24021 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:44:e2:fe", ip: ""} in network mk-ha-594398: {Iface:virbr1 ExpiryTime:2025-10-26 09:06:31 +0000 UTC Type:0 Mac:52:54:00:44:e2:fe Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-594398-m03 Clientid:01:52:54:00:44:e2:fe}
	I1026 08:10:08.330744   24021 main.go:141] libmachine: domain ha-594398-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:44:e2:fe in network mk-ha-594398
	I1026 08:10:08.330866   24021 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/ha-594398-m03/id_rsa Username:docker}
	I1026 08:10:08.414567   24021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:10:08.437932   24021 kubeconfig.go:125] found "ha-594398" server: "https://192.168.39.254:8443"
	I1026 08:10:08.437963   24021 api_server.go:166] Checking apiserver status ...
	I1026 08:10:08.438006   24021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:10:08.460290   24021 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1806/cgroup
	W1026 08:10:08.477332   24021 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1806/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:10:08.477384   24021 ssh_runner.go:195] Run: ls
	I1026 08:10:08.482282   24021 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1026 08:10:08.486900   24021 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1026 08:10:08.486923   24021 status.go:463] ha-594398-m03 apiserver status = Running (err=<nil>)
	I1026 08:10:08.486930   24021 status.go:176] ha-594398-m03 status: &{Name:ha-594398-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:10:08.486960   24021 status.go:174] checking status of ha-594398-m04 ...
	I1026 08:10:08.488500   24021 status.go:371] ha-594398-m04 host status = "Running" (err=<nil>)
	I1026 08:10:08.488516   24021 host.go:66] Checking if "ha-594398-m04" exists ...
	I1026 08:10:08.491258   24021 main.go:141] libmachine: domain ha-594398-m04 has defined MAC address 52:54:00:9a:3e:79 in network mk-ha-594398
	I1026 08:10:08.491698   24021 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:3e:79", ip: ""} in network mk-ha-594398: {Iface:virbr1 ExpiryTime:2025-10-26 09:08:05 +0000 UTC Type:0 Mac:52:54:00:9a:3e:79 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-594398-m04 Clientid:01:52:54:00:9a:3e:79}
	I1026 08:10:08.491725   24021 main.go:141] libmachine: domain ha-594398-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:9a:3e:79 in network mk-ha-594398
	I1026 08:10:08.491870   24021 host.go:66] Checking if "ha-594398-m04" exists ...
	I1026 08:10:08.492080   24021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:10:08.494318   24021 main.go:141] libmachine: domain ha-594398-m04 has defined MAC address 52:54:00:9a:3e:79 in network mk-ha-594398
	I1026 08:10:08.494666   24021 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9a:3e:79", ip: ""} in network mk-ha-594398: {Iface:virbr1 ExpiryTime:2025-10-26 09:08:05 +0000 UTC Type:0 Mac:52:54:00:9a:3e:79 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-594398-m04 Clientid:01:52:54:00:9a:3e:79}
	I1026 08:10:08.494693   24021 main.go:141] libmachine: domain ha-594398-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:9a:3e:79 in network mk-ha-594398
	I1026 08:10:08.494822   24021 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/ha-594398-m04/id_rsa Username:docker}
	I1026 08:10:08.583929   24021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:10:08.602548   24021 status.go:176] ha-594398-m04 status: &{Name:ha-594398-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (83.66s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

TestMultiControlPlane/serial/RestartSecondaryNode (42.77s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-594398 node start m02 --alsologtostderr -v 5: (41.819125594s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (42.77s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (375.68s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 stop --alsologtostderr -v 5
E1026 08:11:13.770689   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:11:22.461521   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:12:45.544694   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:13:29.909626   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:13:57.612130   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-594398 stop --alsologtostderr -v 5: (4m11.38725932s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 start --wait true --alsologtostderr -v 5
E1026 08:16:22.460776   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-594398 start --wait true --alsologtostderr -v 5: (2m4.146411565s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (375.68s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.04s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-594398 node delete m03 --alsologtostderr -v 5: (17.412367905s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.04s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

TestMultiControlPlane/serial/StopCluster (231.21s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 stop --alsologtostderr -v 5
E1026 08:18:29.915164   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-594398 stop --alsologtostderr -v 5: (3m51.146787738s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-594398 status --alsologtostderr -v 5: exit status 7 (63.974912ms)

-- stdout --
	ha-594398
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-594398-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-594398-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1026 08:21:18.179985   27598 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:21:18.180245   27598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:21:18.180256   27598 out.go:374] Setting ErrFile to fd 2...
	I1026 08:21:18.180260   27598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:21:18.180502   27598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
	I1026 08:21:18.180718   27598 out.go:368] Setting JSON to false
	I1026 08:21:18.180756   27598 mustload.go:65] Loading cluster: ha-594398
	I1026 08:21:18.180802   27598 notify.go:220] Checking for updates...
	I1026 08:21:18.181203   27598 config.go:182] Loaded profile config "ha-594398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:21:18.181219   27598 status.go:174] checking status of ha-594398 ...
	I1026 08:21:18.183305   27598 status.go:371] ha-594398 host status = "Stopped" (err=<nil>)
	I1026 08:21:18.183320   27598 status.go:384] host is not running, skipping remaining checks
	I1026 08:21:18.183325   27598 status.go:176] ha-594398 status: &{Name:ha-594398 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:21:18.183348   27598 status.go:174] checking status of ha-594398-m02 ...
	I1026 08:21:18.184640   27598 status.go:371] ha-594398-m02 host status = "Stopped" (err=<nil>)
	I1026 08:21:18.184669   27598 status.go:384] host is not running, skipping remaining checks
	I1026 08:21:18.184673   27598 status.go:176] ha-594398-m02 status: &{Name:ha-594398-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:21:18.184694   27598 status.go:174] checking status of ha-594398-m04 ...
	I1026 08:21:18.185827   27598 status.go:371] ha-594398-m04 host status = "Stopped" (err=<nil>)
	I1026 08:21:18.185839   27598 status.go:384] host is not running, skipping remaining checks
	I1026 08:21:18.185842   27598 status.go:176] ha-594398-m04 status: &{Name:ha-594398-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (231.21s)

TestMultiControlPlane/serial/RestartCluster (92.82s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1026 08:21:22.461867   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-594398 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m32.207854451s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (92.82s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.48s)

TestMultiControlPlane/serial/AddSecondaryNode (85.21s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 node add --control-plane --alsologtostderr -v 5
E1026 08:23:29.916573   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-594398 node add --control-plane --alsologtostderr -v 5: (1m24.545446133s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-594398 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (85.21s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

TestJSONOutput/start/Command (79.95s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-467610 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1026 08:24:52.974493   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-467610 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m19.951634972s)
--- PASS: TestJSONOutput/start/Command (79.95s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-467610 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-467610 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-467610 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-467610 --output=json --user=testUser: (6.833617806s)
--- PASS: TestJSONOutput/stop/Command (6.83s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-443312 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-443312 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (74.634804ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8edb678e-dcc9-49a1-bce8-e627bd972fc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-443312] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f57c391d-b108-4139-894d-702ea58e6881","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21772"}}
	{"specversion":"1.0","id":"c25c07cc-5dd0-4acc-9719-ec6d38913153","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a17f03b6-fb05-4b12-b24e-096c86ddb3ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21772-9405/kubeconfig"}}
	{"specversion":"1.0","id":"5569d1fb-99bb-4682-bef6-c524a21b89c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9405/.minikube"}}
	{"specversion":"1.0","id":"566f75c8-c437-4bd3-b54b-472a7ab43beb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3cd117ce-7bfc-43a7-a9f6-e698b75f7869","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b9df957f-aca5-4112-87f6-84caaee523e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-443312" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-443312
--- PASS: TestErrorJSONOutput (0.23s)
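
Each line in the stdout block above is a self-contained CloudEvents envelope, which is how minikube structures --output=json: specversion, id, source, type, datacontenttype, and a string-keyed data payload. A minimal Go sketch for consuming such a stream; the struct shape is inferred from the log lines above, not taken from minikube's source:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the envelope fields visible in the log above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Read events line by line, e.g. piped from `minikube start --output=json`.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate interleaved non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// Error events carry an exit code, as in the DRV_UNSUPPORTED_OS line above.
			fmt.Printf("error %s (exitcode %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			continue
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}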

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (77.64s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-013466 --driver=kvm2  --container-runtime=crio
E1026 08:26:22.459802   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-013466 --driver=kvm2  --container-runtime=crio: (36.856700975s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-016785 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-016785 --driver=kvm2  --container-runtime=crio: (38.169703825s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-013466
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-016785
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-016785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-016785
helpers_test.go:175: Cleaning up "first-013466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-013466
--- PASS: TestMinikubeProfile (77.64s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (23.11s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-969933 --memory=3072 --mount-string /tmp/TestMountStartserial3423619637/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-969933 --memory=3072 --mount-string /tmp/TestMountStartserial3423619637/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.108222482s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.11s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-969933 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-969933 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
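
The verification above pairs a plain ls with findmnt --json, whose output is a small JSON document with a filesystems array (target, source, fstype, options). A hedged Go sketch of the parsing side; the struct models standard findmnt output and the target path is the one used by the test, but the assertion logic is illustrative rather than minikube's own:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// findmntOutput models the JSON printed by `findmnt --json <target>`.
type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// findmnt exits non-zero when the target is not a mount point.
	out, err := exec.Command("findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		fmt.Println("/minikube-host is not mounted:", err)
		return
	}
	var fm findmntOutput
	if err := json.Unmarshal(out, &fm); err != nil || len(fm.Filesystems) == 0 {
		fmt.Println("unexpected findmnt output")
		return
	}
	fs := fm.Filesystems[0]
	fmt.Printf("%s mounted from %s (type %s, options %s)\n", fs.Target, fs.Source, fs.FSType, fs.Options)
}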

                                                
                                    
TestMountStart/serial/StartWithMountSecond (20.96s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-987053 --memory=3072 --mount-string /tmp/TestMountStartserial3423619637/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-987053 --memory=3072 --mount-string /tmp/TestMountStartserial3423619637/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.956501053s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.96s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-987053 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-987053 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-969933 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-987053 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-987053 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-987053
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-987053: (1.204699458s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (18.61s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-987053
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-987053: (17.614832275s)
--- PASS: TestMountStart/serial/RestartStopped (18.61s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-987053 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-987053 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (98.28s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033904 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1026 08:28:29.910441   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:29:25.546150   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-033904 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m37.944010661s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (98.28s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.15s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-033904 -- rollout status deployment/busybox: (4.594628876s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- exec busybox-7b57f96db7-7zhj7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- exec busybox-7b57f96db7-htlzp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- exec busybox-7b57f96db7-7zhj7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- exec busybox-7b57f96db7-htlzp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- exec busybox-7b57f96db7-7zhj7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- exec busybox-7b57f96db7-htlzp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.15s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- exec busybox-7b57f96db7-7zhj7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- exec busybox-7b57f96db7-7zhj7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- exec busybox-7b57f96db7-htlzp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033904 -- exec busybox-7b57f96db7-htlzp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
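
The host-IP extraction above leans on fixed positions in busybox nslookup output: awk 'NR==5' keeps the fifth line and cut -d' ' -f3 keeps its third space-separated field. The same extraction written out in Go, with a hypothetical sample string standing in for real nslookup output:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3` over nslookup output.
func hostIP(nslookupOutput string) string {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // fifth line, single-space separated
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative sample only; real busybox output formatting can vary.
	sample := "Server: 10.96.0.10\nAddress: 10.96.0.10:53\n\nName: host.minikube.internal\nAddress 1: 192.168.39.1 host.minikube.internal"
	fmt.Println(hostIP(sample)) // 192.168.39.1
}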

                                                
                                    
TestMultiNode/serial/AddNode (70.41s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-033904 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-033904 -v=5 --alsologtostderr: (1m9.981055267s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (70.41s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-033904 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.44s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.89s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 cp testdata/cp-test.txt multinode-033904:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 cp multinode-033904:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1777154572/001/cp-test_multinode-033904.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 cp multinode-033904:/home/docker/cp-test.txt multinode-033904-m02:/home/docker/cp-test_multinode-033904_multinode-033904-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904-m02 "sudo cat /home/docker/cp-test_multinode-033904_multinode-033904-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 cp multinode-033904:/home/docker/cp-test.txt multinode-033904-m03:/home/docker/cp-test_multinode-033904_multinode-033904-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904-m03 "sudo cat /home/docker/cp-test_multinode-033904_multinode-033904-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 cp testdata/cp-test.txt multinode-033904-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 cp multinode-033904-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1777154572/001/cp-test_multinode-033904-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 cp multinode-033904-m02:/home/docker/cp-test.txt multinode-033904:/home/docker/cp-test_multinode-033904-m02_multinode-033904.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904 "sudo cat /home/docker/cp-test_multinode-033904-m02_multinode-033904.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 cp multinode-033904-m02:/home/docker/cp-test.txt multinode-033904-m03:/home/docker/cp-test_multinode-033904-m02_multinode-033904-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904-m03 "sudo cat /home/docker/cp-test_multinode-033904-m02_multinode-033904-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 cp testdata/cp-test.txt multinode-033904-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 cp multinode-033904-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1777154572/001/cp-test_multinode-033904-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 cp multinode-033904-m03:/home/docker/cp-test.txt multinode-033904:/home/docker/cp-test_multinode-033904-m03_multinode-033904.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904 "sudo cat /home/docker/cp-test_multinode-033904-m03_multinode-033904.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 cp multinode-033904-m03:/home/docker/cp-test.txt multinode-033904-m02:/home/docker/cp-test_multinode-033904-m03_multinode-033904-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 ssh -n multinode-033904-m02 "sudo cat /home/docker/cp-test_multinode-033904-m03_multinode-033904-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.89s)

                                                
                                    
TestMultiNode/serial/StopNode (2.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-033904 node stop m03: (1.48916176s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-033904 status: exit status 7 (325.347561ms)

                                                
                                                
-- stdout --
	multinode-033904
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-033904-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-033904-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-033904 status --alsologtostderr: exit status 7 (318.357519ms)

                                                
                                                
-- stdout --
	multinode-033904
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-033904-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-033904-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 08:31:17.617722   33296 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:31:17.617954   33296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:31:17.617962   33296 out.go:374] Setting ErrFile to fd 2...
	I1026 08:31:17.617965   33296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:31:17.618180   33296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
	I1026 08:31:17.618336   33296 out.go:368] Setting JSON to false
	I1026 08:31:17.618365   33296 mustload.go:65] Loading cluster: multinode-033904
	I1026 08:31:17.618463   33296 notify.go:220] Checking for updates...
	I1026 08:31:17.618727   33296 config.go:182] Loaded profile config "multinode-033904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:17.618740   33296 status.go:174] checking status of multinode-033904 ...
	I1026 08:31:17.620680   33296 status.go:371] multinode-033904 host status = "Running" (err=<nil>)
	I1026 08:31:17.620700   33296 host.go:66] Checking if "multinode-033904" exists ...
	I1026 08:31:17.622976   33296 main.go:141] libmachine: domain multinode-033904 has defined MAC address 52:54:00:22:d6:ca in network mk-multinode-033904
	I1026 08:31:17.623365   33296 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:22:d6:ca", ip: ""} in network mk-multinode-033904: {Iface:virbr1 ExpiryTime:2025-10-26 09:28:28 +0000 UTC Type:0 Mac:52:54:00:22:d6:ca Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:multinode-033904 Clientid:01:52:54:00:22:d6:ca}
	I1026 08:31:17.623390   33296 main.go:141] libmachine: domain multinode-033904 has defined IP address 192.168.39.234 and MAC address 52:54:00:22:d6:ca in network mk-multinode-033904
	I1026 08:31:17.623493   33296 host.go:66] Checking if "multinode-033904" exists ...
	I1026 08:31:17.623686   33296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:31:17.625470   33296 main.go:141] libmachine: domain multinode-033904 has defined MAC address 52:54:00:22:d6:ca in network mk-multinode-033904
	I1026 08:31:17.625767   33296 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:22:d6:ca", ip: ""} in network mk-multinode-033904: {Iface:virbr1 ExpiryTime:2025-10-26 09:28:28 +0000 UTC Type:0 Mac:52:54:00:22:d6:ca Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:multinode-033904 Clientid:01:52:54:00:22:d6:ca}
	I1026 08:31:17.625789   33296 main.go:141] libmachine: domain multinode-033904 has defined IP address 192.168.39.234 and MAC address 52:54:00:22:d6:ca in network mk-multinode-033904
	I1026 08:31:17.625967   33296 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/multinode-033904/id_rsa Username:docker}
	I1026 08:31:17.706844   33296 ssh_runner.go:195] Run: systemctl --version
	I1026 08:31:17.713244   33296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:31:17.730043   33296 kubeconfig.go:125] found "multinode-033904" server: "https://192.168.39.234:8443"
	I1026 08:31:17.730094   33296 api_server.go:166] Checking apiserver status ...
	I1026 08:31:17.730139   33296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:31:17.750449   33296 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1334/cgroup
	W1026 08:31:17.762286   33296 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1334/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:31:17.762350   33296 ssh_runner.go:195] Run: ls
	I1026 08:31:17.767749   33296 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I1026 08:31:17.772076   33296 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I1026 08:31:17.772118   33296 status.go:463] multinode-033904 apiserver status = Running (err=<nil>)
	I1026 08:31:17.772128   33296 status.go:176] multinode-033904 status: &{Name:multinode-033904 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:31:17.772145   33296 status.go:174] checking status of multinode-033904-m02 ...
	I1026 08:31:17.773810   33296 status.go:371] multinode-033904-m02 host status = "Running" (err=<nil>)
	I1026 08:31:17.773827   33296 host.go:66] Checking if "multinode-033904-m02" exists ...
	I1026 08:31:17.776102   33296 main.go:141] libmachine: domain multinode-033904-m02 has defined MAC address 52:54:00:72:f1:b7 in network mk-multinode-033904
	I1026 08:31:17.776554   33296 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:72:f1:b7", ip: ""} in network mk-multinode-033904: {Iface:virbr1 ExpiryTime:2025-10-26 09:29:21 +0000 UTC Type:0 Mac:52:54:00:72:f1:b7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:multinode-033904-m02 Clientid:01:52:54:00:72:f1:b7}
	I1026 08:31:17.776581   33296 main.go:141] libmachine: domain multinode-033904-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:72:f1:b7 in network mk-multinode-033904
	I1026 08:31:17.776730   33296 host.go:66] Checking if "multinode-033904-m02" exists ...
	I1026 08:31:17.776991   33296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:31:17.779284   33296 main.go:141] libmachine: domain multinode-033904-m02 has defined MAC address 52:54:00:72:f1:b7 in network mk-multinode-033904
	I1026 08:31:17.779647   33296 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:72:f1:b7", ip: ""} in network mk-multinode-033904: {Iface:virbr1 ExpiryTime:2025-10-26 09:29:21 +0000 UTC Type:0 Mac:52:54:00:72:f1:b7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:multinode-033904-m02 Clientid:01:52:54:00:72:f1:b7}
	I1026 08:31:17.779668   33296 main.go:141] libmachine: domain multinode-033904-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:72:f1:b7 in network mk-multinode-033904
	I1026 08:31:17.779814   33296 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21772-9405/.minikube/machines/multinode-033904-m02/id_rsa Username:docker}
	I1026 08:31:17.863017   33296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:31:17.878510   33296 status.go:176] multinode-033904-m02 status: &{Name:multinode-033904-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:31:17.878542   33296 status.go:174] checking status of multinode-033904-m03 ...
	I1026 08:31:17.880265   33296 status.go:371] multinode-033904-m03 host status = "Stopped" (err=<nil>)
	I1026 08:31:17.880282   33296 status.go:384] host is not running, skipping remaining checks
	I1026 08:31:17.880289   33296 status.go:176] multinode-033904-m03 status: &{Name:multinode-033904-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
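
Worth noting in the run above: minikube status reports a stopped node through exit status 7 while still printing per-node state, and the test treats that as expected rather than fatal. A small sketch of distinguishing that case in Go; treating 7 as "some host stopped" follows the behaviour visible in this log, not a documented constant:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-033904", "status")
	out, err := cmd.Output()
	fmt.Print(string(out)) // per-node state is printed even on non-zero exit

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		fmt.Println("one or more hosts stopped (may be ok)")
	} else if err != nil {
		fmt.Println("status failed:", err)
	}
}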

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.64s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 node start m03 -v=5 --alsologtostderr
E1026 08:31:22.461288   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-033904 node start m03 -v=5 --alsologtostderr: (39.133496291s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.64s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (302.28s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-033904
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-033904
E1026 08:33:29.916447   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-033904: (2m50.725222445s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033904 --wait=true -v=5 --alsologtostderr
E1026 08:36:22.459820   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-033904 --wait=true -v=5 --alsologtostderr: (2m11.429709353s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-033904
--- PASS: TestMultiNode/serial/RestartKeepsNodes (302.28s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-033904 node delete m03: (2.085697606s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.53s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (166.32s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 stop
E1026 08:38:29.910779   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-033904 stop: (2m46.200136623s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-033904 status: exit status 7 (61.284694ms)

                                                
                                                
-- stdout --
	multinode-033904
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-033904-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-033904 status --alsologtostderr: exit status 7 (60.246305ms)

                                                
                                                
-- stdout --
	multinode-033904
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-033904-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 08:39:48.650377   35674 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:39:48.650629   35674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:39:48.650640   35674 out.go:374] Setting ErrFile to fd 2...
	I1026 08:39:48.650648   35674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:39:48.650820   35674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
	I1026 08:39:48.650976   35674 out.go:368] Setting JSON to false
	I1026 08:39:48.651007   35674 mustload.go:65] Loading cluster: multinode-033904
	I1026 08:39:48.651058   35674 notify.go:220] Checking for updates...
	I1026 08:39:48.651505   35674 config.go:182] Loaded profile config "multinode-033904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:39:48.651542   35674 status.go:174] checking status of multinode-033904 ...
	I1026 08:39:48.653916   35674 status.go:371] multinode-033904 host status = "Stopped" (err=<nil>)
	I1026 08:39:48.653933   35674 status.go:384] host is not running, skipping remaining checks
	I1026 08:39:48.653939   35674 status.go:176] multinode-033904 status: &{Name:multinode-033904 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:39:48.653962   35674 status.go:174] checking status of multinode-033904-m02 ...
	I1026 08:39:48.655260   35674 status.go:371] multinode-033904-m02 host status = "Stopped" (err=<nil>)
	I1026 08:39:48.655275   35674 status.go:384] host is not running, skipping remaining checks
	I1026 08:39:48.655281   35674 status.go:176] multinode-033904-m02 status: &{Name:multinode-033904-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (166.32s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (84.37s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033904 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-033904 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m23.924441804s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033904 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (84.37s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.29s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-033904
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033904-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-033904-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (71.432011ms)

                                                
                                                
-- stdout --
	* [multinode-033904-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-9405/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9405/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-033904-m02' is duplicated with machine name 'multinode-033904-m02' in profile 'multinode-033904'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033904-m03 --driver=kvm2  --container-runtime=crio
E1026 08:41:22.461299   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:41:32.976645   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-033904-m03 --driver=kvm2  --container-runtime=crio: (39.113968536s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-033904
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-033904: exit status 80 (203.519546ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-033904 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-033904-m03 already exists in multinode-033904-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-033904-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.29s)

                                                
                                    
TestScheduledStopUnix (106.82s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-239689 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-239689 --memory=3072 --driver=kvm2  --container-runtime=crio: (35.240504265s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-239689 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-239689 -n scheduled-stop-239689
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-239689 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1026 08:45:03.876445   13321 retry.go:31] will retry after 148.657µs: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
I1026 08:45:03.877608   13321 retry.go:31] will retry after 77.869µs: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
I1026 08:45:03.878767   13321 retry.go:31] will retry after 319.397µs: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
I1026 08:45:03.879907   13321 retry.go:31] will retry after 441.47µs: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
I1026 08:45:03.881043   13321 retry.go:31] will retry after 390.829µs: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
I1026 08:45:03.882175   13321 retry.go:31] will retry after 598.747µs: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
I1026 08:45:03.883304   13321 retry.go:31] will retry after 1.109579ms: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
I1026 08:45:03.885492   13321 retry.go:31] will retry after 1.61341ms: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
I1026 08:45:03.887669   13321 retry.go:31] will retry after 2.833698ms: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
I1026 08:45:03.890859   13321 retry.go:31] will retry after 4.829477ms: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
I1026 08:45:03.896079   13321 retry.go:31] will retry after 7.233966ms: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
I1026 08:45:03.904341   13321 retry.go:31] will retry after 11.066061ms: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
I1026 08:45:03.915515   13321 retry.go:31] will retry after 16.333533ms: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
I1026 08:45:03.932765   13321 retry.go:31] will retry after 17.691292ms: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
I1026 08:45:03.951054   13321 retry.go:31] will retry after 41.394813ms: open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/scheduled-stop-239689/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-239689 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-239689 -n scheduled-stop-239689
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-239689
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-239689 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1026 08:46:05.550302   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-239689
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-239689: exit status 7 (58.438869ms)

                                                
                                                
-- stdout --
	scheduled-stop-239689
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-239689 -n scheduled-stop-239689
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-239689 -n scheduled-stop-239689: exit status 7 (58.344316ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-239689" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-239689
--- PASS: TestScheduledStopUnix (106.82s)
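
The run above walks minikube's scheduled-stop surface: queue a stop with --schedule, supersede it by scheduling again, clear it with --cancel-scheduled, then let a short schedule fire and observe the Stopped status. A compact Go sketch of driving that same sequence; binary path and profile name are taken from the log, and error handling plus the exact wait are trimmed for brevity:

package main

import (
	"os/exec"
	"time"
)

func mk(args ...string) error {
	return exec.Command("out/minikube-linux-amd64", args...).Run()
}

func main() {
	const p = "scheduled-stop-239689"
	_ = mk("stop", "-p", p, "--schedule", "5m")   // queue a stop five minutes out
	_ = mk("stop", "-p", p, "--schedule", "15s")  // a new schedule replaces the old one
	_ = mk("stop", "-p", p, "--cancel-scheduled") // clear the pending stop
	_ = mk("stop", "-p", p, "--schedule", "15s")  // schedule again and let it fire
	time.Sleep(30 * time.Second)
	_ = mk("status", "-p", p) // expected: exit status 7 with host Stopped
}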

TestRunningBinaryUpgrade (117.74s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.418973048 start -p running-upgrade-582578 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1026 08:46:22.460517   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.418973048 start -p running-upgrade-582578 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m31.789921837s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-582578 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-582578 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (22.007920619s)
helpers_test.go:175: Cleaning up "running-upgrade-582578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-582578
--- PASS: TestRunningBinaryUpgrade (117.74s)

TestKubernetesUpgrade (151.66s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-600874 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-600874 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (37.771751988s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-600874
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-600874: (1.897355565s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-600874 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-600874 status --format={{.Host}}: exit status 7 (70.786663ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-600874 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-600874 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.02330422s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-600874 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-600874 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-600874 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (97.8928ms)

-- stdout --
	* [kubernetes-upgrade-600874] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-9405/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9405/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-600874
	    minikube start -p kubernetes-upgrade-600874 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6008742 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-600874 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-600874 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-600874 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.774496052s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-600874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-600874
--- PASS: TestKubernetesUpgrade (151.66s)
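
TestKubernetesUpgrade passes because minikube compares the requested version against the deployed one and refuses to go backwards with K8S_DOWNGRADE_UNSUPPORTED before touching the cluster. A minimal Go sketch of such a guard; the parsing helper and function names are illustrative assumptions, not minikube's source:

package main

import (
    "fmt"
    "strconv"
    "strings"
)

// parse turns "v1.34.1" into its numeric major/minor/patch components.
func parse(v string) (parts [3]int) {
    for i, p := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
        parts[i], _ = strconv.Atoi(p)
    }
    return
}

// downgradeRequested reports whether target is older than current.
func downgradeRequested(current, target string) bool {
    c, t := parse(current), parse(target)
    for i := range c {
        if t[i] != c[i] {
            return t[i] < c[i]
        }
    }
    return false
}

func main() {
    if downgradeRequested("v1.34.1", "v1.28.0") {
        fmt.Println("Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0")
    }
}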

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-546120 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-546120 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (90.391917ms)

-- stdout --
	* [NoKubernetes-546120] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-9405/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9405/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
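
The exit status 14 above is a fast usage failure: an explicit --kubernetes-version contradicts --no-kubernetes, so minikube bails out before provisioning anything. A minimal sketch of that kind of mutually exclusive flag check; the flag wiring here is assumed, and only the flag names and the message come from the log:

package main

import (
    "flag"
    "fmt"
    "os"
)

func main() {
    noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
    kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
    flag.Parse()

    // Asking for a specific Kubernetes version while disabling Kubernetes
    // is contradictory, so fail with a usage error before doing any work.
    if *noKubernetes && *kubernetesVersion != "" {
        fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
        os.Exit(14) // matches the usage-class exit status in the log
    }
}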

TestNoKubernetes/serial/StartWithK8s (76.44s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-546120 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-546120 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.172678055s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-546120 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (76.44s)

TestNoKubernetes/serial/StartWithStopK8s (27.59s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-546120 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-546120 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (26.340733032s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-546120 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-546120 status -o json: exit status 2 (262.377945ms)

-- stdout --
	{"Name":"NoKubernetes-546120","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-546120
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.59s)

TestNoKubernetes/serial/Start (23.9s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-546120 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-546120 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (23.901733811s)
--- PASS: TestNoKubernetes/serial/Start (23.90s)

TestNetworkPlugins/group/false (4.98s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-236878 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-236878 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (835.182168ms)

-- stdout --
	* [false-236878] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-9405/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9405/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1026 08:48:01.369511   40503 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:48:01.369794   40503 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:48:01.369807   40503 out.go:374] Setting ErrFile to fd 2...
	I1026 08:48:01.369814   40503 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:48:01.370102   40503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9405/.minikube/bin
	I1026 08:48:01.370749   40503 out.go:368] Setting JSON to false
	I1026 08:48:01.371949   40503 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5425,"bootTime":1761463056,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:48:01.372101   40503 start.go:141] virtualization: kvm guest
	I1026 08:48:01.374172   40503 out.go:179] * [false-236878] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:48:01.375544   40503 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:48:01.375580   40503 notify.go:220] Checking for updates...
	I1026 08:48:01.377748   40503 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:48:01.379187   40503 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9405/kubeconfig
	I1026 08:48:01.380755   40503 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9405/.minikube
	I1026 08:48:01.381897   40503 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:48:01.383220   40503 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:48:01.385238   40503 config.go:182] Loaded profile config "NoKubernetes-546120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1026 08:48:01.385389   40503 config.go:182] Loaded profile config "kubernetes-upgrade-600874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:48:01.385505   40503 config.go:182] Loaded profile config "running-upgrade-582578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 08:48:01.385958   40503 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:48:02.121522   40503 out.go:179] * Using the kvm2 driver based on user configuration
	I1026 08:48:02.122672   40503 start.go:305] selected driver: kvm2
	I1026 08:48:02.122693   40503 start.go:925] validating driver "kvm2" against <nil>
	I1026 08:48:02.122709   40503 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:48:02.124841   40503 out.go:203] 
	W1026 08:48:02.125851   40503 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1026 08:48:02.126978   40503 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-236878 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-236878

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-236878

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-236878

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-236878

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-236878

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-236878

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-236878

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-236878

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-236878

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-236878

>>> host: /etc/nsswitch.conf:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: /etc/hosts:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: /etc/resolv.conf:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-236878

>>> host: crictl pods:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: crictl containers:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> k8s: describe netcat deployment:
error: context "false-236878" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-236878" does not exist

>>> k8s: netcat logs:
error: context "false-236878" does not exist

>>> k8s: describe coredns deployment:
error: context "false-236878" does not exist

>>> k8s: describe coredns pods:
error: context "false-236878" does not exist

>>> k8s: coredns logs:
error: context "false-236878" does not exist

>>> k8s: describe api server pod(s):
error: context "false-236878" does not exist

>>> k8s: api server logs:
error: context "false-236878" does not exist

>>> host: /etc/cni:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: ip a s:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: ip r s:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: iptables-save:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: iptables table nat:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> k8s: describe kube-proxy daemon set:
error: context "false-236878" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-236878" does not exist

>>> k8s: kube-proxy logs:
error: context "false-236878" does not exist

>>> host: kubelet daemon status:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: kubelet daemon config:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> k8s: kubelet logs:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-9405/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:48:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.123:8443
  name: kubernetes-upgrade-600874
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-9405/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:48:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.156:8443
  name: running-upgrade-582578
contexts:
- context:
    cluster: kubernetes-upgrade-600874
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:48:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-600874
  name: kubernetes-upgrade-600874
- context:
    cluster: running-upgrade-582578
    user: running-upgrade-582578
  name: running-upgrade-582578
current-context: running-upgrade-582578
kind: Config
users:
- name: kubernetes-upgrade-600874
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kubernetes-upgrade-600874/client.crt
    client-key: /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kubernetes-upgrade-600874/client.key
- name: running-upgrade-582578
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/running-upgrade-582578/client.crt
    client-key: /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/running-upgrade-582578/client.key
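
A minimal sketch, assuming k8s.io/client-go is available, of loading the kubeconfig dumped above and listing its contexts; the path is the KUBECONFIG value from the log:

package main

import (
    "fmt"

    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21772-9405/kubeconfig")
    if err != nil {
        panic(err)
    }
    fmt.Println("current-context:", cfg.CurrentContext)
    for name, ctx := range cfg.Contexts {
        fmt.Printf("context %s -> cluster %s (user %s)\n", name, ctx.Cluster, ctx.AuthInfo)
    }
}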

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-236878

>>> host: docker daemon status:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: docker daemon config:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: /etc/docker/daemon.json:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: docker system info:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: cri-docker daemon status:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: cri-docker daemon config:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: cri-dockerd version:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: containerd daemon status:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: containerd daemon config:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: /etc/containerd/config.toml:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: containerd config dump:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: crio daemon status:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: crio daemon config:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: /etc/crio:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

>>> host: crio config:
* Profile "false-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236878"

----------------------- debugLogs end: false-236878 [took: 3.954135387s] --------------------------------
helpers_test.go:175: Cleaning up "false-236878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-236878
--- PASS: TestNetworkPlugins/group/false (4.98s)
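
TestNetworkPlugins/group/false passes because minikube rejects --cni=false for cri-o during flag validation, before any VM is created, which is why the whole group finishes in under five seconds. A minimal sketch of such a check; the function and rule below are illustrative assumptions, not minikube's source:

package main

import (
    "fmt"
    "os"
)

// validateCNI rejects disabling CNI for runtimes that cannot fall back to a
// built-in network plugin; per the log above, cri-o is one of them.
func validateCNI(runtime, cni string) error {
    if cni == "false" && runtime == "crio" {
        return fmt.Errorf("The %q container runtime requires CNI", runtime)
    }
    return nil
}

func main() {
    if err := validateCNI("crio", "false"); err != nil {
        fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
        os.Exit(14)
    }
}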

TestNoKubernetes/serial/VerifyK8sNotRunning (0.15s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-546120 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-546120 "sudo systemctl is-active --quiet service kubelet": exit status 1 (153.299784ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.15s)
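
This check passes precisely because the probe fails: systemctl is-active exits non-zero when the unit is not active (the log's remote status 4 is what systemd conventionally reports for an unknown or missing unit). A minimal Go sketch of the same probe; the binary path and profile come from the log, the helper itself is assumed:

package main

import (
    "fmt"
    "os/exec"
)

// kubeletActive runs the is-active probe over minikube ssh and treats any
// non-zero remote exit as "kubelet not running".
func kubeletActive(profile string) (bool, error) {
    cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
        "sudo systemctl is-active --quiet service kubelet")
    if err := cmd.Run(); err != nil {
        if _, ok := err.(*exec.ExitError); ok {
            return false, nil // unit inactive or absent, which this test expects
        }
        return false, err // the ssh invocation itself failed
    }
    return true, nil
}

func main() {
    active, err := kubeletActive("NoKubernetes-546120")
    fmt.Println("kubelet active:", active, "err:", err)
}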

TestNoKubernetes/serial/ProfileList (0.64s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.64s)

TestNoKubernetes/serial/Stop (1.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-546120
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-546120: (1.198481512s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

TestNoKubernetes/serial/StartNoArgs (56.51s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-546120 --driver=kvm2  --container-runtime=crio
E1026 08:48:29.910462   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-546120 --driver=kvm2  --container-runtime=crio: (56.509356957s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (56.51s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-546120 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-546120 "sudo systemctl is-active --quiet service kubelet": exit status 1 (178.09414ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

TestPause/serial/Start (88.1s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-907323 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-907323 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m28.104396778s)
--- PASS: TestPause/serial/Start (88.10s)

TestStoppedBinaryUpgrade/Setup (2.76s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.76s)

TestStoppedBinaryUpgrade/Upgrade (123.8s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2970784894 start -p stopped-upgrade-237845 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2970784894 start -p stopped-upgrade-237845 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m28.77039828s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2970784894 -p stopped-upgrade-237845 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2970784894 -p stopped-upgrade-237845 stop: (1.669449866s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-237845 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1026 08:51:22.460115   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-237845 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (33.363789837s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (123.80s)
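
The upgrade test drives three CLI steps: provision with the previous release, stop the cluster, then start the same profile with the freshly built binary, which must upgrade it in place. A minimal Go sketch of that sequence; the binary paths and flags are taken from the log above, the run helper is assumed:

package main

import (
    "os"
    "os/exec"
)

// run executes a minikube invocation, streaming its output.
func run(bin string, args ...string) error {
    cmd := exec.Command(bin, args...)
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    return cmd.Run()
}

func main() {
    old := "/tmp/minikube-v1.32.0.2970784894"
    cur := "out/minikube-linux-amd64"
    profile := "stopped-upgrade-237845"

    // 1) create the cluster with the previous release
    _ = run(old, "start", "-p", profile, "--memory=3072", "--vm-driver=kvm2", "--container-runtime=crio")
    // 2) stop it while still on the old version
    _ = run(old, "-p", profile, "stop")
    // 3) restart with the new binary, which upgrades the stopped cluster
    _ = run(cur, "start", "-p", profile, "--memory=3072", "--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
}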

TestNetworkPlugins/group/auto/Start (91.91s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-236878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-236878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m31.908638394s)
--- PASS: TestNetworkPlugins/group/auto/Start (91.91s)

TestPause/serial/SecondStartNoReconfiguration (39.47s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-907323 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-907323 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.443654862s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.47s)

TestPause/serial/Pause (0.78s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-907323 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

TestPause/serial/VerifyStatus (0.21s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-907323 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-907323 --output=json --layout=cluster: exit status 2 (212.509062ms)

-- stdout --
	{"Name":"pause-907323","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-907323","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.21s)
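
The --layout=cluster payload above is plain JSON in which StatusCode 418 marks paused components and 405 stopped ones. A minimal sketch of decoding it; the struct fields mirror the keys visible in the log:

package main

import (
    "encoding/json"
    "fmt"
)

type component struct {
    Name       string
    StatusCode int
    StatusName string
}

type node struct {
    Name       string
    StatusCode int
    StatusName string
    Components map[string]component
}

type clusterStatus struct {
    Name          string
    StatusCode    int
    StatusName    string
    BinaryVersion string
    Components    map[string]component
    Nodes         []node
}

func main() {
    raw := `{"Name":"pause-907323","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-907323","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"}}}]}`
    var st clusterStatus
    if err := json.Unmarshal([]byte(raw), &st); err != nil {
        panic(err)
    }
    fmt.Println(st.StatusName, st.Nodes[0].Components["apiserver"].StatusName)
}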

TestPause/serial/Unpause (0.72s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-907323 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.72s)

TestPause/serial/PauseAgain (0.91s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-907323 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

TestPause/serial/DeletePaused (0.89s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-907323 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.89s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-237845
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-237845: (1.084507833s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

TestPause/serial/VerifyDeletedResources (0.6s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.60s)

TestNetworkPlugins/group/kindnet/Start (61.41s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-236878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-236878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m1.405563577s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.41s)

TestNetworkPlugins/group/calico/Start (93.28s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-236878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-236878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m33.2846796s)
--- PASS: TestNetworkPlugins/group/calico/Start (93.28s)

TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-236878 "pgrep -a kubelet"
I1026 08:51:55.878603   13321 config.go:182] Loaded profile config "auto-236878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

TestNetworkPlugins/group/auto/NetCatPod (12.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-236878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2wz4p" [06b983e8-4c8b-4253-8c37-86f9652001f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2wz4p" [06b983e8-4c8b-4253-8c37-86f9652001f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.005171416s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.24s)
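
The NetCatPod steps above wait for a pod matching a label selector to become healthy. A minimal sketch of that wait, assuming k8s.io/client-go is available; the kubeconfig path comes from the log, and timeout handling is omitted for brevity:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21772-9405/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    for {
        pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "app=netcat"})
        if err == nil {
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    fmt.Println("healthy:", p.Name)
                    return
                }
            }
        }
        time.Sleep(2 * time.Second) // poll until a matching pod is Running
    }
}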

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-236878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-236878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-236878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (73.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-236878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-236878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m13.190150934s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.19s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-hkzm2" [7c6a7aa9-6046-4444-ad24-8f630a098ac4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00476889s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-236878 "pgrep -a kubelet"
I1026 08:52:42.041018   13321 config.go:182] Loaded profile config "kindnet-236878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-236878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rh9rj" [92d11b6a-d18f-4ade-be9b-cf9bb072de32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rh9rj" [92d11b6a-d18f-4ade-be9b-cf9bb072de32] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.038976056s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-236878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-236878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)
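
In the Localhost step, nc runs in zero-I/O mode: -z only probes that the port accepts a connection, -w 5 caps the wait at five seconds, and -i 5 adds an interval between probes. A variant that also surfaces the exit code (illustrative):

	kubectl --context kindnet-236878 exec deployment/netcat -- /bin/sh -c 'nc -w 5 -z localhost 8080; echo nc_exit=$?'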

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-236878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)
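
HairPin exercises hairpin NAT: the pod connects back to itself through its own Service name, so the packet leaves the pod, hits the Service VIP, and must be looped back to the same endpoint. A CNI without hairpin support typically shows up here as nc timing out rather than connecting. The same probe with an explicit success marker (illustrative):

	kubectl --context kindnet-236878 exec deployment/netcat -- /bin/sh -c 'nc -w 5 -z netcat 8080 && echo hairpin-ok'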

TestNetworkPlugins/group/enable-default-cni/Start (85.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-236878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-236878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m25.862615137s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.86s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-p7w6r" [0dbe27cf-4103-4731-a6ae-cf00a3b3c4d3] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-p7w6r" [0dbe27cf-4103-4731-a6ae-cf00a3b3c4d3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006146869s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-236878 "pgrep -a kubelet"
I1026 08:53:16.164408   13321 config.go:182] Loaded profile config "calico-236878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

TestNetworkPlugins/group/calico/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-236878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-n2869" [9a6a9b50-51c2-4ae6-b8dc-9c23721ba2ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-n2869" [9a6a9b50-51c2-4ae6-b8dc-9c23721ba2ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005323877s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.26s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-236878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-236878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-236878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-236878 "pgrep -a kubelet"
I1026 08:53:36.974898   13321 config.go:182] Loaded profile config "custom-flannel-236878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-236878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vlqqh" [529a0f8d-c384-49c8-8440-b2e40c201d21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vlqqh" [529a0f8d-c384-49c8-8440-b2e40c201d21] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005306064s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.31s)

TestNetworkPlugins/group/flannel/Start (68.4s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-236878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-236878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m8.403430063s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.40s)

TestNetworkPlugins/group/bridge/Start (109.6s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-236878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-236878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m49.60033326s)
--- PASS: TestNetworkPlugins/group/bridge/Start (109.60s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-236878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-236878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-236878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestStartStop/group/old-k8s-version/serial/FirstStart (118.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-257547 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-257547 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m58.905409671s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (118.91s)
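
This FirstStart pins Kubernetes v1.28.0 on the default KVM network via qemu:///system with driver mounts disabled, which is consistent with it being one of the slower starts in this run (~119s). Once the profile is up, the pinned server version can be confirmed with (illustrative):

	kubectl --context old-k8s-version-257547 version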

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-236878 "pgrep -a kubelet"
I1026 08:54:35.100421   13321 config.go:182] Loaded profile config "enable-default-cni-236878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-236878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-236878 replace --force -f testdata/netcat-deployment.yaml: (1.066094253s)
I1026 08:54:36.194644   13321 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jgxzb" [4a9e98a7-059c-482e-b4a9-1686e85fe718] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jgxzb" [4a9e98a7-059c-482e-b4a9-1686e85fe718] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.002872517s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.12s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-236878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-236878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-236878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-hvp42" [0a8f06ea-c936-4a3c-ba48-5bfb6c4f4537] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003976369s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-236878 "pgrep -a kubelet"
I1026 08:55:00.788002   13321 config.go:182] Loaded profile config "flannel-236878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-236878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lcbzr" [90d1ea9c-b4af-4dc3-bd62-4651a265aca6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lcbzr" [90d1ea9c-b4af-4dc3-bd62-4651a265aca6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004591752s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

TestStartStop/group/no-preload/serial/FirstStart (97.77s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-499565 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-499565 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m37.771699531s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (97.77s)
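
With --preload=false the start skips minikube's preloaded image/state tarball and pulls images through the runtime instead, consistent with the longer FirstStart time here. What actually landed in the runtime can be inspected the same way the later VerifyKubernetesImages step does:

	out/minikube-linux-amd64 -p no-preload-499565 image list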

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-236878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-236878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-236878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestStartStop/group/embed-certs/serial/FirstStart (60.39s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-097734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-097734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m0.392493571s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (60.39s)
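
--embed-certs stores the client certificate and key as base64 data inside kubeconfig instead of file paths, so the context keeps working if the profile directory moves. One way to check for embedded data (--raw bypasses kubectl's usual redaction; the jsonpath filter is illustrative):

	kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-097734")].user.client-certificate-data}' | head -c 16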

TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-236878 "pgrep -a kubelet"
I1026 08:55:38.265520   13321 config.go:182] Loaded profile config "bridge-236878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-236878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t25c9" [f4bd6fcd-782f-4b74-80ef-fc7d335aa922] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t25c9" [f4bd6fcd-782f-4b74-80ef-fc7d335aa922] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005290406s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-236878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-236878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-236878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-257547 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [274f18fe-0213-4b98-83cb-853225f21a2c] Pending
helpers_test.go:352: "busybox" [274f18fe-0213-4b98-83cb-853225f21a2c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [274f18fe-0213-4b98-83cb-853225f21a2c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004763362s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-257547 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.43s)
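
The closing exec checks the container's open-file soft limit with ulimit -n. The corresponding hard limit can be read the same way (illustrative):

	kubectl --context old-k8s-version-257547 exec busybox -- /bin/sh -c "ulimit -Hn"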

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-072449 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-072449 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m23.994952472s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.00s)
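
--apiserver-port=8444 moves the API server off the default 8443, and the generated kubeconfig entry should point at it. A quick check (the jsonpath filter is illustrative):

	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-072449")].cluster.server}'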

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-257547 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-257547 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.383651296s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-257547 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.50s)
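
The --images/--registries overrides deliberately retarget metrics-server at an echoserver:1.4 image pulled from fake.domain; the describe that follows only asserts the Deployment object exists, not that its image pulls. The image actually wired in can be read back with (illustrative):

	kubectl --context old-k8s-version-257547 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'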

TestStartStop/group/old-k8s-version/serial/Stop (85.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-257547 --alsologtostderr -v=3
E1026 08:56:22.460080   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/addons-465751/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-257547 --alsologtostderr -v=3: (1m25.576323773s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (85.58s)

TestStartStop/group/embed-certs/serial/DeployApp (12.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-097734 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [61de3670-088f-48e5-b9a0-8885f5e72917] Pending
helpers_test.go:352: "busybox" [61de3670-088f-48e5-b9a0-8885f5e72917] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [61de3670-088f-48e5-b9a0-8885f5e72917] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.004170707s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-097734 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.30s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-097734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-097734 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/embed-certs/serial/Stop (81.5s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-097734 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-097734 --alsologtostderr -v=3: (1m21.503267314s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (81.50s)

TestStartStop/group/no-preload/serial/DeployApp (11.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-499565 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f3526283-736c-4e62-868f-e583b573512f] Pending
helpers_test.go:352: "busybox" [f3526283-736c-4e62-868f-e583b573512f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f3526283-736c-4e62-868f-e583b573512f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005104925s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-499565 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.30s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-499565 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-499565 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/no-preload/serial/Stop (89.42s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-499565 --alsologtostderr -v=3
E1026 08:56:56.102667   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:56:56.109055   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:56:56.120502   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:56:56.141985   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:56:56.183687   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:56:56.265207   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:56:56.426712   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:56:56.748420   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:56:57.390133   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:56:58.671874   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:01.233441   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:06.355596   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:16.597761   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-499565 --alsologtostderr -v=3: (1m29.423950604s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (89.42s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-072449 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [56fd04b3-19bd-49cf-941d-526473f68755] Pending
helpers_test.go:352: "busybox" [56fd04b3-19bd-49cf-941d-526473f68755] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [56fd04b3-19bd-49cf-941d-526473f68755] Running
E1026 08:57:35.860208   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:35.866606   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:35.877971   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:35.899353   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:35.940793   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:36.022244   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:36.184069   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:36.505875   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:37.079840   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:37.147895   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:38.429810   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:40.991217   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003856251s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-072449 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-072449 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-072449 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (86.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-072449 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-072449 --alsologtostderr -v=3: (1m26.097613967s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (86.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257547 -n old-k8s-version-257547
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257547 -n old-k8s-version-257547: exit status 7 (61.678366ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-257547 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)
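
Here status exits 7 because the host is down (the stdout above reads Stopped, and the test notes the code "may be ok"), while addons enable dashboard still succeeds against the stopped profile. The exit code can be observed directly (illustrative):

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257547; echo status_exit=$?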

TestStartStop/group/old-k8s-version/serial/SecondStart (41.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-257547 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1026 08:57:46.113206   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:57:56.354780   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-257547 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (40.880670992s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257547 -n old-k8s-version-257547
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (41.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-097734 -n embed-certs-097734
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-097734 -n embed-certs-097734: exit status 7 (68.069486ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-097734 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/embed-certs/serial/SecondStart (44.01s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-097734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1026 08:58:09.955433   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:09.961897   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:09.973264   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:09.994603   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:10.036119   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:10.117854   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:10.279118   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:10.601076   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:11.242989   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:12.524839   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:12.978568   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:15.087119   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:16.836999   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:18.041321   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:20.209282   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-097734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (43.690768278s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-097734 -n embed-certs-097734
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499565 -n no-preload-499565
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499565 -n no-preload-499565: exit status 7 (74.410305ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-499565 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (60.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-499565 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-499565 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m0.013477282s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-499565 -n no-preload-499565
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (60.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-jt46t" [ff4debed-8ab1-4b29-9ead-e77b377d2cba] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1026 08:58:29.910258   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/functional-118718/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:30.450671   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-jt46t" [ff4debed-8ab1-4b29-9ead-e77b377d2cba] Running
E1026 08:58:37.264550   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:37.270971   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:37.282373   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:37.303847   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:37.345476   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:37.426979   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:37.588515   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.009050399s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)
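Note on the E1026 cert_rotation lines above (which recur throughout the rest of this report): they appear to come from the Kubernetes client's certificate-reload watcher in the shared test process, which still references client.crt files of profiles (functional-118718, calico-236878, custom-flannel-236878, and others) whose .minikube profile directories were deleted by earlier tests. They are interleaved by timestamp rather than tied to the test they appear under, and they did not affect this result.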

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-jt46t" [ff4debed-8ab1-4b29-9ead-e77b377d2cba] Running
E1026 08:58:37.910803   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:38.552621   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:39.834735   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:42.396897   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005057661s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-257547 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-257547 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
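VerifyKubernetesImages lists the images present in the node's runtime and flags any outside the expected Kubernetes set; the kindnetd and busybox images reported here are expected leftovers from earlier subtests in this group, not failures. The listing step can be reproduced by hand with the same command the test runs (profile name from this run):

	out/minikube-linux-amd64 -p old-k8s-version-257547 image list --format=json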

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-257547 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257547 -n old-k8s-version-257547
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257547 -n old-k8s-version-257547: exit status 2 (248.224971ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-257547 -n old-k8s-version-257547
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-257547 -n old-k8s-version-257547: exit status 2 (240.714793ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-257547 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257547 -n old-k8s-version-257547
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-257547 -n old-k8s-version-257547
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.80s)
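The Pause subtest drives the sequence visible above: pause the profile, confirm via Go-template status queries that the API server reports Paused and the kubelet reports Stopped (status exits 2 when a queried component is paused or stopped, which the test tolerates), then unpause and re-query. A rough manual equivalent, with results as observed above noted in comments (profile name from this run):

	out/minikube-linux-amd64 pause -p old-k8s-version-257547
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257547   # Paused, exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-257547     # Stopped, exit 2
	out/minikube-linux-amd64 unpause -p old-k8s-version-257547
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257547   # expected: Running, exit 0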

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2fmt4" [012a3aaf-c2b2-4749-80ef-601295c49148] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2fmt4" [012a3aaf-c2b2-4749-80ef-601295c49148] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.003043877s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-010560 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1026 08:58:47.518817   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:50.932786   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-010560 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (46.397953103s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.40s)
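The newest-cni group starts the cluster with a bare CNI configuration: --network-plugin=cni selects CNI without installing a plugin, and --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 is passed through to kubeadm as the pod network CIDR. Because no CNI plugin is ever deployed, the subtests in this group that would schedule user pods are skipped with the "cni mode requires additional setup" warning seen below.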

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2fmt4" [012a3aaf-c2b2-4749-80ef-601295c49148] Running
E1026 08:58:57.761160   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:58:57.798673   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004525295s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-097734 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-097734 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-097734 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-097734 -n embed-certs-097734
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-097734 -n embed-certs-097734: exit status 2 (218.876857ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-097734 -n embed-certs-097734
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-097734 -n embed-certs-097734: exit status 2 (210.087141ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-097734 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-097734 -n embed-certs-097734
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-097734 -n embed-certs-097734
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-072449 -n default-k8s-diff-port-072449
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-072449 -n default-k8s-diff-port-072449: exit status 7 (69.750016ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-072449 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)
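EnableAddonAfterStop depends on minikube status exiting 7 while the host is stopped; the test treats that exit code as acceptable and then enables the dashboard addon against the stopped profile, so that SecondStart (next) can verify the addon comes up. Roughly (profile name from this run):

	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-072449   # Stopped, exit 7
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-072449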

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-072449 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1026 08:59:18.243336   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-072449 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (45.501640414s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-072449 -n default-k8s-diff-port-072449
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.82s)
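As the group name suggests, this restart passes --apiserver-port=8444 (instead of minikube's default 8443) to check that a non-standard API server port survives a stop/start cycle.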

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tgfsd" [cc8387b3-bb50-4f67-a2de-16ba3b22dc48] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tgfsd" [cc8387b3-bb50-4f67-a2de-16ba3b22dc48] Running
E1026 08:59:31.894254   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/calico-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.003636769s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-010560 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-010560 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.54548121s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.55s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tgfsd" [cc8387b3-bb50-4f67-a2de-16ba3b22dc48] Running
E1026 08:59:36.168449   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/enable-default-cni-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:36.174826   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/enable-default-cni-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:36.186258   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/enable-default-cni-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:36.208049   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/enable-default-cni-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004395385s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-499565 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)
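The "waiting 9m0s for pods matching" helper polls the pod list by label until the pod is Running and ready. A kubectl approximation of the same readiness check (context and label taken from this run):

	kubectl --context no-preload-499565 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m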

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-010560 --alsologtostderr -v=3
E1026 08:59:36.250102   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/enable-default-cni-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:36.332323   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/enable-default-cni-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:36.493933   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/enable-default-cni-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:36.815438   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/enable-default-cni-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:37.457157   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/enable-default-cni-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:38.738469   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/enable-default-cni-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:39.962792   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/auto-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-010560 --alsologtostderr -v=3: (7.927084664s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-499565 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-499565 --alsologtostderr -v=1
E1026 08:59:41.300649   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/enable-default-cni-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-499565 -n no-preload-499565
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-499565 -n no-preload-499565: exit status 2 (241.402309ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-499565 -n no-preload-499565
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-499565 -n no-preload-499565: exit status 2 (219.021363ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-499565 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-499565 -n no-preload-499565
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-499565 -n no-preload-499565
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.81s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-010560 -n newest-cni-010560
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-010560 -n newest-cni-010560: exit status 7 (81.951327ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-010560 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (32.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-010560 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-010560 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (32.703393108s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-010560 -n newest-cni-010560
E1026 09:00:17.146813   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/enable-default-cni-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k95qh" [ea8b297c-01b9-41c0-ae8a-2eb54eea8487] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1026 08:59:54.588211   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:54.594616   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:54.606046   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:54.627569   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:54.669113   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:54.750640   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:54.912269   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:55.234075   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:55.875648   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k95qh" [ea8b297c-01b9-41c0-ae8a-2eb54eea8487] Running
E1026 08:59:56.665008   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/enable-default-cni-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:57.156941   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:59.204742   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/custom-flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:59:59.719270   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004456383s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k95qh" [ea8b297c-01b9-41c0-ae8a-2eb54eea8487] Running
E1026 09:00:04.840679   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/flannel-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00405007s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-072449 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-072449 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-072449 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-072449 --alsologtostderr -v=1: (1.036926128s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-072449 -n default-k8s-diff-port-072449
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-072449 -n default-k8s-diff-port-072449: exit status 2 (208.395254ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-072449 -n default-k8s-diff-port-072449
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-072449 -n default-k8s-diff-port-072449: exit status 2 (222.211996ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-072449 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-072449 --alsologtostderr -v=1: (1.296623547s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-072449 -n default-k8s-diff-port-072449
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-072449 -n default-k8s-diff-port-072449
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-010560 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-010560 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-010560 -n newest-cni-010560
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-010560 -n newest-cni-010560: exit status 2 (209.655068ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-010560 -n newest-cni-010560
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-010560 -n newest-cni-010560: exit status 2 (212.465399ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-010560 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-010560 -n newest-cni-010560
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-010560 -n newest-cni-010560
E1026 09:00:19.720856   13321 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kindnet-236878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.27s)

                                                
                                    

Test skip (40/329)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.28
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 4.26
268 TestNetworkPlugins/group/cilium 4.56
279 TestStartStop/group/disable-driver-mounts 0.21
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-465751 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (4.26s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-236878 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-236878

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-236878

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-236878

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-236878

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-236878

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-236878

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-236878

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-236878

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-236878

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-236878

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: /etc/hosts:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: /etc/resolv.conf:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-236878

>>> host: crictl pods:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: crictl containers:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> k8s: describe netcat deployment:
error: context "kubenet-236878" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-236878" does not exist

>>> k8s: netcat logs:
error: context "kubenet-236878" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-236878" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-236878" does not exist

>>> k8s: coredns logs:
error: context "kubenet-236878" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-236878" does not exist

>>> k8s: api server logs:
error: context "kubenet-236878" does not exist

>>> host: /etc/cni:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: ip a s:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: ip r s:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: iptables-save:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: iptables table nat:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-236878" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-236878" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-236878" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: kubelet daemon config:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> k8s: kubelet logs:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-9405/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:47:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.123:8443
  name: kubernetes-upgrade-600874
contexts:
- context:
    cluster: kubernetes-upgrade-600874
    user: kubernetes-upgrade-600874
  name: kubernetes-upgrade-600874
current-context: kubernetes-upgrade-600874
kind: Config
users:
- name: kubernetes-upgrade-600874
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kubernetes-upgrade-600874/client.crt
    client-key: /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kubernetes-upgrade-600874/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-236878

>>> host: docker daemon status:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: docker daemon config:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: docker system info:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: cri-docker daemon status:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: cri-docker daemon config:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: cri-dockerd version:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: containerd daemon status:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: containerd daemon config:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: containerd config dump:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: crio daemon status:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: crio daemon config:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: /etc/crio:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

>>> host: crio config:
* Profile "kubenet-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236878"

----------------------- debugLogs end: kubenet-236878 [took: 4.070302897s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-236878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-236878
--- SKIP: TestNetworkPlugins/group/kubenet (4.26s)

TestNetworkPlugins/group/cilium (4.56s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-236878 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-236878

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-236878

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-236878

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-236878

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-236878

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-236878

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-236878

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-236878

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-236878

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-236878

>>> host: /etc/nsswitch.conf:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: /etc/hosts:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: /etc/resolv.conf:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-236878

>>> host: crictl pods:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: crictl containers:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> k8s: describe netcat deployment:
error: context "cilium-236878" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-236878" does not exist

>>> k8s: netcat logs:
error: context "cilium-236878" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-236878" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-236878" does not exist

>>> k8s: coredns logs:
error: context "cilium-236878" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-236878" does not exist

>>> k8s: api server logs:
error: context "cilium-236878" does not exist

>>> host: /etc/cni:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: ip a s:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: ip r s:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: iptables-save:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: iptables table nat:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-236878

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-236878

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-236878" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-236878" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-236878

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-236878

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-236878" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-236878" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-236878" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-236878" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-236878" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: kubelet daemon config:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> k8s: kubelet logs:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-9405/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:48:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.123:8443
  name: kubernetes-upgrade-600874
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-9405/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:48:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.156:8443
  name: running-upgrade-582578
contexts:
- context:
    cluster: kubernetes-upgrade-600874
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:48:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-600874
  name: kubernetes-upgrade-600874
- context:
    cluster: running-upgrade-582578
    user: running-upgrade-582578
  name: running-upgrade-582578
current-context: running-upgrade-582578
kind: Config
users:
- name: kubernetes-upgrade-600874
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kubernetes-upgrade-600874/client.crt
    client-key: /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/kubernetes-upgrade-600874/client.key
- name: running-upgrade-582578
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/running-upgrade-582578/client.crt
    client-key: /home/jenkins/minikube-integration/21772-9405/.minikube/profiles/running-upgrade-582578/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-236878

>>> host: docker daemon status:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: docker daemon config:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: docker system info:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: cri-docker daemon status:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: cri-docker daemon config:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: cri-dockerd version:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: containerd daemon status:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: containerd daemon config:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: containerd config dump:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: crio daemon status:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: crio daemon config:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: /etc/crio:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

>>> host: crio config:
* Profile "cilium-236878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236878"

----------------------- debugLogs end: cilium-236878 [took: 4.352179734s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-236878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-236878
--- SKIP: TestNetworkPlugins/group/cilium (4.56s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-269831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-269831
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)