Test Report: KVM_Linux_crio 21796

                    
dade2a2e0f7c4c88a0aa5c1a92ad2c1084f27e44:2025-10-25:42053

Failed tests (3/323)

Order  Failed test                                     Duration (s)
37     TestAddons/parallel/Ingress                     159.19
243    TestPreload                                     122.29
286    TestPause/serial/SecondStartNoReconfiguration   48.77
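
To retry a failure locally, a single test can be re-run on its own; a sketch, assuming minikube's integration suite still accepts --minikube-start-args and that out/minikube-linux-amd64 has been built (both consistent with this log):

    # re-run only the Ingress failure with this job's driver and runtime
    go test ./test/integration -run "TestAddons/parallel/Ingress" -timeout 60m \
      -args --minikube-start-args="--driver=kvm2 --container-runtime=crio"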
TestAddons/parallel/Ingress (159.19s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-631036 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-631036 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-631036 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2c09d0ba-4bcf-41ee-a6df-0ac2dfc801a8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2c09d0ba-4bcf-41ee-a6df-0ac2dfc801a8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005101395s
I1025 08:33:07.061666    9881 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-631036 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.440324385s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-631036 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.24
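
Triage note: "Process exited with status 28" above is curl's exit code for an operation timeout (CURLE_OPERATION_TIMEDOUT), i.e. nothing answered the Host-routed request on 127.0.0.1:80 inside the VM. A sketch of the same probe with an explicit short client-side timeout for faster iteration (-m 10 is illustrative; the test itself does not pass -m):

    out/minikube-linux-amd64 -p addons-631036 ssh \
      "curl -s -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/"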
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-631036 -n addons-631036
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-631036 logs -n 25: (1.365130555s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-411797                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-411797 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-631240 --alsologtostderr --binary-mirror http://127.0.0.1:38089 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-631240 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-631240                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-631240 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ addons  │ enable dashboard -p addons-631036                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-631036                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ start   │ -p addons-631036 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-631036 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-631036 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ enable headlamp -p addons-631036 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-631036 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-631036 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-631036 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-631036 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
	│ ip      │ addons-631036 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-631036 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-631036 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
	│ ssh     │ addons-631036 ssh cat /opt/local-path-provisioner/pvc-28e1dc7b-1f5a-4207-a5b2-acbed43ab42a_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-631036 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │ 25 Oct 25 08:33 UTC │
	│ addons  │ addons-631036 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ 25 Oct 25 08:33 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-631036                                                                                                                                                                                                                                                                                                                                                                                         │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ 25 Oct 25 08:33 UTC │
	│ addons  │ addons-631036 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ 25 Oct 25 08:33 UTC │
	│ ssh     │ addons-631036 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-631036 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ 25 Oct 25 08:33 UTC │
	│ addons  │ addons-631036 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ 25 Oct 25 08:33 UTC │
	│ ip      │ addons-631036 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-631036        │ jenkins │ v1.37.0 │ 25 Oct 25 08:35 UTC │ 25 Oct 25 08:35 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:29:40
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:29:40.695721   10463 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:29:40.695943   10463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:40.695952   10463 out.go:374] Setting ErrFile to fd 2...
	I1025 08:29:40.695957   10463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:40.696135   10463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
	I1025 08:29:40.696688   10463 out.go:368] Setting JSON to false
	I1025 08:29:40.697500   10463 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":731,"bootTime":1761380250,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:29:40.697589   10463 start.go:141] virtualization: kvm guest
	I1025 08:29:40.699647   10463 out.go:179] * [addons-631036] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 08:29:40.701325   10463 notify.go:220] Checking for updates...
	I1025 08:29:40.701384   10463 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 08:29:40.702911   10463 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:29:40.704437   10463 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	I1025 08:29:40.705844   10463 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 08:29:40.707133   10463 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 08:29:40.708419   10463 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:29:40.710154   10463 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:29:40.741814   10463 out.go:179] * Using the kvm2 driver based on user configuration
	I1025 08:29:40.743181   10463 start.go:305] selected driver: kvm2
	I1025 08:29:40.743195   10463 start.go:925] validating driver "kvm2" against <nil>
	I1025 08:29:40.743207   10463 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:29:40.743872   10463 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:29:40.744123   10463 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:29:40.744149   10463 cni.go:84] Creating CNI manager for ""
	I1025 08:29:40.744192   10463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 08:29:40.744198   10463 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 08:29:40.744262   10463 start.go:349] cluster config:
	{Name:addons-631036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-631036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:29:40.744355   10463 iso.go:125] acquiring lock: {Name:mk56ae07ef3e2fe29ebca77d84768cf173c5b3d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 08:29:40.745794   10463 out.go:179] * Starting "addons-631036" primary control-plane node in "addons-631036" cluster
	I1025 08:29:40.747018   10463 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:29:40.747055   10463 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 08:29:40.747069   10463 cache.go:58] Caching tarball of preloaded images
	I1025 08:29:40.747192   10463 preload.go:233] Found /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 08:29:40.747203   10463 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 08:29:40.747510   10463 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/config.json ...
	I1025 08:29:40.747535   10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/config.json: {Name:mkcb1a921b1e0b0d5f4d452a0969ef27ecab2822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:29:40.747681   10463 start.go:360] acquireMachinesLock for addons-631036: {Name:mk307ae3583c207a47794987d4930662cf65d417 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 08:29:40.747725   10463 start.go:364] duration metric: took 30.63µs to acquireMachinesLock for "addons-631036"
	I1025 08:29:40.747742   10463 start.go:93] Provisioning new machine with config: &{Name:addons-631036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-631036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 08:29:40.747793   10463 start.go:125] createHost starting for "" (driver="kvm2")
	I1025 08:29:40.749273   10463 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1025 08:29:40.749427   10463 start.go:159] libmachine.API.Create for "addons-631036" (driver="kvm2")
	I1025 08:29:40.749455   10463 client.go:168] LocalClient.Create starting
	I1025 08:29:40.749532   10463 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem
	I1025 08:29:40.919015   10463 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem
	I1025 08:29:41.038521   10463 main.go:141] libmachine: creating domain...
	I1025 08:29:41.038541   10463 main.go:141] libmachine: creating network...
	I1025 08:29:41.039987   10463 main.go:141] libmachine: found existing default network
	I1025 08:29:41.040206   10463 main.go:141] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1025 08:29:41.040764   10463 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e4ae00}
	I1025 08:29:41.040853   10463 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-631036</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1025 08:29:41.046862   10463 main.go:141] libmachine: creating private network mk-addons-631036 192.168.39.0/24...
	I1025 08:29:41.118578   10463 main.go:141] libmachine: private network mk-addons-631036 192.168.39.0/24 created
	I1025 08:29:41.118855   10463 main.go:141] libmachine: <network>
	  <name>mk-addons-631036</name>
	  <uuid>235f9f39-a409-4b6e-a380-c479334ac67d</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:2b:63:73'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
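
For hands-on debugging, the same libvirt network can be inspected or rebuilt with virsh against the URI this run uses (qemu:///system); the XML filename below is hypothetical:

    # dump the network minikube created, or recreate it from the XML above
    virsh --connect qemu:///system net-dumpxml mk-addons-631036
    virsh --connect qemu:///system net-define mk-addons-631036.xml
    virsh --connect qemu:///system net-start mk-addons-631036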
	
	I1025 08:29:41.118882   10463 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036 ...
	I1025 08:29:41.118904   10463 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21796-5973/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1025 08:29:41.118914   10463 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 08:29:41.119006   10463 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21796-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21796-5973/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1025 08:29:41.378586   10463 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa...
	I1025 08:29:41.897826   10463 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/addons-631036.rawdisk...
	I1025 08:29:41.897866   10463 main.go:141] libmachine: Writing magic tar header
	I1025 08:29:41.897890   10463 main.go:141] libmachine: Writing SSH key tar header
	I1025 08:29:41.897959   10463 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036 ...
	I1025 08:29:41.898023   10463 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036
	I1025 08:29:41.898045   10463 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036 (perms=drwx------)
	I1025 08:29:41.898057   10463 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973/.minikube/machines
	I1025 08:29:41.898067   10463 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973/.minikube/machines (perms=drwxr-xr-x)
	I1025 08:29:41.898078   10463 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 08:29:41.898089   10463 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973/.minikube (perms=drwxr-xr-x)
	I1025 08:29:41.898096   10463 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973
	I1025 08:29:41.898106   10463 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973 (perms=drwxrwxr-x)
	I1025 08:29:41.898116   10463 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1025 08:29:41.898126   10463 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1025 08:29:41.898136   10463 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1025 08:29:41.898146   10463 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1025 08:29:41.898154   10463 main.go:141] libmachine: checking permissions on dir: /home
	I1025 08:29:41.898163   10463 main.go:141] libmachine: skipping /home - not owner
	I1025 08:29:41.898167   10463 main.go:141] libmachine: defining domain...
	I1025 08:29:41.899557   10463 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-631036</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/addons-631036.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-631036'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
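
The domain defined from this XML can likewise be driven directly with virsh; a sketch (again, the XML filename is hypothetical):

    virsh --connect qemu:///system define addons-631036.xml
    virsh --connect qemu:///system start addons-631036
    virsh --connect qemu:///system dumpxml addons-631036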
	
	I1025 08:29:41.908133   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:68:81:5b in network default
	I1025 08:29:41.908782   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:41.908800   10463 main.go:141] libmachine: starting domain...
	I1025 08:29:41.908805   10463 main.go:141] libmachine: ensuring networks are active...
	I1025 08:29:41.909603   10463 main.go:141] libmachine: Ensuring network default is active
	I1025 08:29:41.909978   10463 main.go:141] libmachine: Ensuring network mk-addons-631036 is active
	I1025 08:29:41.910640   10463 main.go:141] libmachine: getting domain XML...
	I1025 08:29:41.911649   10463 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-631036</name>
	  <uuid>47cdcab0-e8ea-48b5-a70c-5c459d82a833</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/addons-631036.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:04:3b:0f'/>
	      <source network='mk-addons-631036'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:68:81:5b'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1025 08:29:43.241470   10463 main.go:141] libmachine: waiting for domain to start...
	I1025 08:29:43.243012   10463 main.go:141] libmachine: domain is now running
	I1025 08:29:43.243032   10463 main.go:141] libmachine: waiting for IP...
	I1025 08:29:43.243808   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:43.244361   10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
	I1025 08:29:43.244373   10463 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:29:43.244597   10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
	I1025 08:29:43.244635   10463 retry.go:31] will retry after 298.209668ms: waiting for domain to come up
	I1025 08:29:43.544077   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:43.544633   10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
	I1025 08:29:43.544648   10463 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:29:43.544930   10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
	I1025 08:29:43.544959   10463 retry.go:31] will retry after 253.047315ms: waiting for domain to come up
	I1025 08:29:43.799355   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:43.799862   10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
	I1025 08:29:43.799879   10463 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:29:43.800206   10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
	I1025 08:29:43.800257   10463 retry.go:31] will retry after 473.795837ms: waiting for domain to come up
	I1025 08:29:44.275904   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:44.276469   10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
	I1025 08:29:44.276486   10463 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:29:44.276791   10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
	I1025 08:29:44.276822   10463 retry.go:31] will retry after 408.756949ms: waiting for domain to come up
	I1025 08:29:44.687846   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:44.688811   10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
	I1025 08:29:44.688834   10463 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:29:44.689209   10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
	I1025 08:29:44.689269   10463 retry.go:31] will retry after 677.09377ms: waiting for domain to come up
	I1025 08:29:45.368460   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:45.369105   10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
	I1025 08:29:45.369128   10463 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:29:45.369530   10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
	I1025 08:29:45.369573   10463 retry.go:31] will retry after 930.349614ms: waiting for domain to come up
	I1025 08:29:46.301443   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:46.301973   10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
	I1025 08:29:46.301988   10463 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:29:46.302307   10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
	I1025 08:29:46.302349   10463 retry.go:31] will retry after 775.285338ms: waiting for domain to come up
	I1025 08:29:47.079525   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:47.080097   10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
	I1025 08:29:47.080115   10463 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:29:47.080461   10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
	I1025 08:29:47.080503   10463 retry.go:31] will retry after 1.000525447s: waiting for domain to come up
	I1025 08:29:48.082690   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:48.083250   10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
	I1025 08:29:48.083265   10463 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:29:48.083569   10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
	I1025 08:29:48.083600   10463 retry.go:31] will retry after 1.700888796s: waiting for domain to come up
	I1025 08:29:49.786627   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:49.787251   10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
	I1025 08:29:49.787266   10463 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:29:49.787557   10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
	I1025 08:29:49.787591   10463 retry.go:31] will retry after 2.032833179s: waiting for domain to come up
	I1025 08:29:51.822183   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:51.822872   10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
	I1025 08:29:51.822892   10463 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:29:51.823202   10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
	I1025 08:29:51.823271   10463 retry.go:31] will retry after 2.195452187s: waiting for domain to come up
	I1025 08:29:54.021606   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:54.022161   10463 main.go:141] libmachine: no network interface addresses found for domain addons-631036 (source=lease)
	I1025 08:29:54.022178   10463 main.go:141] libmachine: trying to list again with source=arp
	I1025 08:29:54.022494   10463 main.go:141] libmachine: unable to find current IP address of domain addons-631036 in network mk-addons-631036 (interfaces detected: [])
	I1025 08:29:54.022531   10463 retry.go:31] will retry after 3.490188088s: waiting for domain to come up
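
The lease/arp lookups being retried above correspond to virsh's guest-address query; a sketch for checking by hand while the loop waits (--source arp requires a reasonably recent libvirt):

    virsh --connect qemu:///system domifaddr addons-631036 --source lease
    virsh --connect qemu:///system domifaddr addons-631036 --source arp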
	I1025 08:29:57.515359   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:57.515926   10463 main.go:141] libmachine: domain addons-631036 has current primary IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:57.515944   10463 main.go:141] libmachine: found domain IP: 192.168.39.24
	I1025 08:29:57.515954   10463 main.go:141] libmachine: reserving static IP address...
	I1025 08:29:57.516319   10463 main.go:141] libmachine: unable to find host DHCP lease matching {name: "addons-631036", mac: "52:54:00:04:3b:0f", ip: "192.168.39.24"} in network mk-addons-631036
	I1025 08:29:57.702132   10463 main.go:141] libmachine: reserved static IP address 192.168.39.24 for domain addons-631036
	I1025 08:29:57.702160   10463 main.go:141] libmachine: waiting for SSH...
	I1025 08:29:57.702168   10463 main.go:141] libmachine: Getting to WaitForSSH function...
	I1025 08:29:57.705210   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:57.705651   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:minikube Clientid:01:52:54:00:04:3b:0f}
	I1025 08:29:57.705681   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:57.705911   10463 main.go:141] libmachine: Using SSH client type: native
	I1025 08:29:57.706183   10463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1025 08:29:57.706196   10463 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1025 08:29:57.819926   10463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
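
The "exit 0" run above is purely a reachability probe; the equivalent by hand, using the SSH user and key path that appear later in this log (host-key checking disabled to mirror a throwaway VM):

    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa \
        docker@192.168.39.24 'exit 0' && echo reachable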
	I1025 08:29:57.820338   10463 main.go:141] libmachine: domain creation complete
	I1025 08:29:57.821746   10463 machine.go:93] provisionDockerMachine start ...
	I1025 08:29:57.823846   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:57.824291   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:29:57.824316   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:57.824498   10463 main.go:141] libmachine: Using SSH client type: native
	I1025 08:29:57.824695   10463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1025 08:29:57.824706   10463 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 08:29:57.934463   10463 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 08:29:57.934491   10463 buildroot.go:166] provisioning hostname "addons-631036"
	I1025 08:29:57.937521   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:57.937965   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:29:57.937989   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:57.938201   10463 main.go:141] libmachine: Using SSH client type: native
	I1025 08:29:57.938451   10463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1025 08:29:57.938465   10463 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-631036 && echo "addons-631036" | sudo tee /etc/hostname
	I1025 08:29:58.069956   10463 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-631036
	
	I1025 08:29:58.073640   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:58.074053   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:29:58.074077   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:58.074311   10463 main.go:141] libmachine: Using SSH client type: native
	I1025 08:29:58.074503   10463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1025 08:29:58.074518   10463 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-631036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-631036/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-631036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 08:29:58.198346   10463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 08:29:58.198378   10463 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5973/.minikube}
	I1025 08:29:58.198419   10463 buildroot.go:174] setting up certificates
	I1025 08:29:58.198431   10463 provision.go:84] configureAuth start
	I1025 08:29:58.202025   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:58.202475   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:29:58.202497   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:58.205253   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:58.205718   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:29:58.205749   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:58.205887   10463 provision.go:143] copyHostCerts
	I1025 08:29:58.205965   10463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5973/.minikube/cert.pem (1123 bytes)
	I1025 08:29:58.206109   10463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5973/.minikube/key.pem (1679 bytes)
	I1025 08:29:58.206184   10463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5973/.minikube/ca.pem (1078 bytes)
	I1025 08:29:58.206268   10463 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca-key.pem org=jenkins.addons-631036 san=[127.0.0.1 192.168.39.24 addons-631036 localhost minikube]
	I1025 08:29:58.586608   10463 provision.go:177] copyRemoteCerts
	I1025 08:29:58.586665   10463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 08:29:58.589851   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:58.590373   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:29:58.590404   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:58.590575   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:29:58.678119   10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 08:29:58.711072   10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 08:29:58.743918   10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 08:29:58.776915   10463 provision.go:87] duration metric: took 578.472161ms to configureAuth
	I1025 08:29:58.776950   10463 buildroot.go:189] setting minikube options for container-runtime
	I1025 08:29:58.777162   10463 config.go:182] Loaded profile config "addons-631036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:29:58.780159   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:58.780592   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:29:58.780622   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:58.780791   10463 main.go:141] libmachine: Using SSH client type: native
	I1025 08:29:58.780987   10463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1025 08:29:58.781007   10463 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 08:29:59.026043   10463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 08:29:59.026078   10463 machine.go:96] duration metric: took 1.204314191s to provisionDockerMachine
	I1025 08:29:59.026094   10463 client.go:171] duration metric: took 18.276629357s to LocalClient.Create
	I1025 08:29:59.026110   10463 start.go:167] duration metric: took 18.276685143s to libmachine.API.Create "addons-631036"
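	The CRIO_MINIKUBE_OPTIONS file written above carries extra runtime flags; the assumption here is that the crio unit on the buildroot image sources /etc/sysconfig/crio.minikube via an EnvironmentFile= directive, which the restart then picks up. An illustrative way to confirm on the guest:
	
	  # Inspect the generated options file and how the crio unit consumes it
	  cat /etc/sysconfig/crio.minikube
	  systemctl cat crio | grep -i -A1 environment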
	I1025 08:29:59.026118   10463 start.go:293] postStartSetup for "addons-631036" (driver="kvm2")
	I1025 08:29:59.026126   10463 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 08:29:59.026203   10463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 08:29:59.029181   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:59.029588   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:29:59.029611   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:59.029855   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:29:59.125200   10463 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 08:29:59.131432   10463 info.go:137] Remote host: Buildroot 2025.02
	I1025 08:29:59.131458   10463 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5973/.minikube/addons for local assets ...
	I1025 08:29:59.131538   10463 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5973/.minikube/files for local assets ...
	I1025 08:29:59.131562   10463 start.go:296] duration metric: took 105.439276ms for postStartSetup
	I1025 08:29:59.135073   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:59.135522   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:29:59.135545   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:59.135757   10463 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/config.json ...
	I1025 08:29:59.135951   10463 start.go:128] duration metric: took 18.388149122s to createHost
	I1025 08:29:59.138233   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:59.138610   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:29:59.138630   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:59.138811   10463 main.go:141] libmachine: Using SSH client type: native
	I1025 08:29:59.139041   10463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1025 08:29:59.139061   10463 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 08:29:59.259355   10463 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761380999.218833551
	
	I1025 08:29:59.259379   10463 fix.go:216] guest clock: 1761380999.218833551
	I1025 08:29:59.259386   10463 fix.go:229] Guest: 2025-10-25 08:29:59.218833551 +0000 UTC Remote: 2025-10-25 08:29:59.135961729 +0000 UTC m=+18.487636401 (delta=82.871822ms)
	I1025 08:29:59.259403   10463 fix.go:200] guest clock delta is within tolerance: 82.871822ms
	I1025 08:29:59.259408   10463 start.go:83] releasing machines lock for "addons-631036", held for 18.511673494s
	I1025 08:29:59.262606   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:59.263332   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:29:59.263362   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:59.264136   10463 ssh_runner.go:195] Run: cat /version.json
	I1025 08:29:59.264170   10463 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 08:29:59.267725   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:59.267731   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:59.268399   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:29:59.268465   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:59.268480   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:29:59.268520   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:29:59.268673   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:29:59.268852   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:29:59.375771   10463 ssh_runner.go:195] Run: systemctl --version
	I1025 08:29:59.382592   10463 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 08:29:59.544605   10463 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 08:29:59.552574   10463 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 08:29:59.552645   10463 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 08:29:59.574371   10463 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
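	Note that the conflicting bridge/podman CNI configs are parked by renaming to *.mk_disabled rather than deleted, so the change is reversible with the inverse move (illustrative):
	
	  # Restore the stock podman bridge config if it is ever needed again
	  sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
	          /etc/cni/net.d/87-podman-bridge.conflist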
	I1025 08:29:59.574399   10463 start.go:495] detecting cgroup driver to use...
	I1025 08:29:59.574459   10463 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 08:29:59.593535   10463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 08:29:59.611360   10463 docker.go:218] disabling cri-docker service (if available) ...
	I1025 08:29:59.611426   10463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 08:29:59.629423   10463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 08:29:59.648459   10463 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 08:29:59.801057   10463 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 08:30:00.019587   10463 docker.go:234] disabling docker service ...
	I1025 08:30:00.019651   10463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 08:30:00.036703   10463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 08:30:00.053263   10463 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 08:30:00.214725   10463 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 08:30:00.364681   10463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 08:30:00.383391   10463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 08:30:00.407451   10463 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 08:30:00.407518   10463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:00.420966   10463 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 08:30:00.421039   10463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:00.434556   10463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:00.448048   10463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:00.461223   10463 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 08:30:00.475421   10463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:00.488726   10463 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:00.511110   10463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
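	Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs driver with conmon in the pod cgroup, and open privileged ports to unprivileged pods. A one-shot check that each edit landed (expected values inferred from the commands, not captured from the VM):
	
	  # Verify the intended CRI-O settings after the edits
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	  # Expected (sketch):
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",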
	I1025 08:30:00.524910   10463 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 08:30:00.536721   10463 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 08:30:00.536780   10463 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 08:30:00.558309   10463 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 08:30:00.570939   10463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:30:00.717621   10463 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 08:30:00.823698   10463 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 08:30:00.823805   10463 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 08:30:00.829358   10463 start.go:563] Will wait 60s for crictl version
	I1025 08:30:00.829443   10463 ssh_runner.go:195] Run: which crictl
	I1025 08:30:00.834045   10463 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 08:30:00.877407   10463 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 08:30:00.877552   10463 ssh_runner.go:195] Run: crio --version
	I1025 08:30:00.908377   10463 ssh_runner.go:195] Run: crio --version
	I1025 08:30:00.941918   10463 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1025 08:30:00.946319   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:00.946684   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:00.946705   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:00.946865   10463 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1025 08:30:00.951707   10463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:30:00.968351   10463 kubeadm.go:883] updating cluster {Name:addons-631036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-631036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 08:30:00.968463   10463 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:30:00.968508   10463 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:30:01.005353   10463 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1025 08:30:01.005434   10463 ssh_runner.go:195] Run: which lz4
	I1025 08:30:01.009951   10463 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 08:30:01.015169   10463 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 08:30:01.015225   10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1025 08:30:02.443791   10463 crio.go:462] duration metric: took 1.43389573s to copy over tarball
	I1025 08:30:02.443863   10463 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 08:30:04.372032   10463 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.92814397s)
	I1025 08:30:04.372060   10463 crio.go:469] duration metric: took 1.928239765s to extract the tarball
	I1025 08:30:04.372079   10463 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 08:30:04.416414   10463 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:30:04.466628   10463 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:30:04.466674   10463 cache_images.go:85] Images are preloaded, skipping loading
	I1025 08:30:04.466700   10463 kubeadm.go:934] updating node { 192.168.39.24 8443 v1.34.1 crio true true} ...
	I1025 08:30:04.466807   10463 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-631036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-631036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
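	In the kubelet unit above, the empty ExecStart= line is the standard systemd drop-in idiom: it clears the base unit's command so the replacement ExecStart fully overrides it rather than appending a second invocation. Once the drop-in is installed and daemon-reload runs (both happen below), the merged result can be inspected with:
	
	  # Show the base kubelet unit plus the 10-kubeadm.conf drop-in
	  sudo systemctl cat kubelet
	  # Confirm which ExecStart actually won
	  systemctl show kubelet -p ExecStart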
	I1025 08:30:04.466893   10463 ssh_runner.go:195] Run: crio config
	I1025 08:30:04.516979   10463 cni.go:84] Creating CNI manager for ""
	I1025 08:30:04.517014   10463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 08:30:04.517049   10463 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 08:30:04.517077   10463 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-631036 NodeName:addons-631036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 08:30:04.517230   10463 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-631036"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.24"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
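	The generated kubeadm config above chains four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one YAML stream. As an illustrative extra step, it could be sanity-checked on the guest with the matching kubeadm binary before init; the test itself relies on init's own preflight below:
	
	  # Validate the multi-document config against kubeadm v1.34.1
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml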
	
	I1025 08:30:04.517327   10463 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 08:30:04.531168   10463 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 08:30:04.531264   10463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 08:30:04.544394   10463 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1025 08:30:04.567612   10463 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 08:30:04.590674   10463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1025 08:30:04.612980   10463 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I1025 08:30:04.618090   10463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:30:04.635403   10463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:30:04.789182   10463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:30:04.811273   10463 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036 for IP: 192.168.39.24
	I1025 08:30:04.811300   10463 certs.go:195] generating shared ca certs ...
	I1025 08:30:04.811316   10463 certs.go:227] acquiring lock for ca certs: {Name:mke8d6ba2f98d813f76972dbfee9daa2e84822df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:04.811491   10463 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key
	I1025 08:30:05.439097   10463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt ...
	I1025 08:30:05.439129   10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt: {Name:mk52dd658a0757ce0a6c9d1937a34c5b33809a45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:05.439353   10463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key ...
	I1025 08:30:05.439372   10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key: {Name:mk0a25859ffa4cab5b8f6ed9286aa875514390e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:05.439493   10463 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key
	I1025 08:30:05.794211   10463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.crt ...
	I1025 08:30:05.794246   10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.crt: {Name:mk4aced4c5d58ad8d817891f3164a9d5ecefafb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:05.805373   10463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key ...
	I1025 08:30:05.805410   10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key: {Name:mke338d3cdba9c659c3b1df69ce22afb83bc9a5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:05.805555   10463 certs.go:257] generating profile certs ...
	I1025 08:30:05.805641   10463 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.key
	I1025 08:30:05.805659   10463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt with IP's: []
	I1025 08:30:06.154111   10463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt ...
	I1025 08:30:06.154144   10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: {Name:mk5e914da6617d5db487eb5d64f7c4a06b0d240b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:06.154353   10463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.key ...
	I1025 08:30:06.154368   10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.key: {Name:mk02e1cd0d843764aa57671aa8a4f96ad3514f45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:06.154472   10463 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.key.88f6c32f
	I1025 08:30:06.154501   10463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.crt.88f6c32f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.24]
	I1025 08:30:06.507164   10463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.crt.88f6c32f ...
	I1025 08:30:06.507196   10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.crt.88f6c32f: {Name:mk96fe0a39fd076cc1ea279f8e3c11c3f7d8b3f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:06.507429   10463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.key.88f6c32f ...
	I1025 08:30:06.507449   10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.key.88f6c32f: {Name:mk061a2652044a4cb0a34c624c008b6e699d6b2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:06.507554   10463 certs.go:382] copying /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.crt.88f6c32f -> /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.crt
	I1025 08:30:06.507627   10463 certs.go:386] copying /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.key.88f6c32f -> /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.key
	I1025 08:30:06.507674   10463 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.key
	I1025 08:30:06.507691   10463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.crt with IP's: []
	I1025 08:30:06.545705   10463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.crt ...
	I1025 08:30:06.545743   10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.crt: {Name:mk147b81e8c7960c73ea5a0ac7ebe9763e43565b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:06.545963   10463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.key ...
	I1025 08:30:06.545983   10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.key: {Name:mkf10dc08877c1895350813fc7155a602105261f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:06.546201   10463 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 08:30:06.546261   10463 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem (1078 bytes)
	I1025 08:30:06.546285   10463 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem (1123 bytes)
	I1025 08:30:06.546326   10463 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/key.pem (1679 bytes)
	I1025 08:30:06.546907   10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 08:30:06.585153   10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 08:30:06.617185   10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 08:30:06.651261   10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1025 08:30:06.684039   10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 08:30:06.717347   10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 08:30:06.751188   10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 08:30:06.782843   10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 08:30:06.815805   10463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 08:30:06.849340   10463 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 08:30:06.873749   10463 ssh_runner.go:195] Run: openssl version
	I1025 08:30:06.880802   10463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 08:30:06.894071   10463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:06.899610   10463 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:06.899677   10463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:06.907576   10463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
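	The b5213941.0 symlink created above follows OpenSSL's hashed-directory convention: the TLS stack looks CAs up by subject hash, and the .0 suffix is a collision counter. The name comes straight from the hash command two steps earlier:
	
	  # Reproduce the symlink name: subject hash of the minikube CA
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # -> b5213941, hence /etc/ssl/certs/b5213941.0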
	I1025 08:30:06.922175   10463 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 08:30:06.927613   10463 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 08:30:06.927679   10463 kubeadm.go:400] StartCluster: {Name:addons-631036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-631036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:30:06.927757   10463 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:30:06.927825   10463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:30:06.969004   10463 cri.go:89] found id: ""
	I1025 08:30:06.969094   10463 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 08:30:06.981215   10463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 08:30:06.994112   10463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 08:30:07.007901   10463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 08:30:07.007929   10463 kubeadm.go:157] found existing configuration files:
	
	I1025 08:30:07.008006   10463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 08:30:07.019502   10463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 08:30:07.019575   10463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 08:30:07.031854   10463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 08:30:07.043496   10463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 08:30:07.043563   10463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 08:30:07.055843   10463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 08:30:07.069028   10463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 08:30:07.069113   10463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 08:30:07.081969   10463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 08:30:07.093875   10463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 08:30:07.093942   10463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 08:30:07.106573   10463 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 08:30:07.163256   10463 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 08:30:07.163337   10463 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 08:30:07.283107   10463 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 08:30:07.283324   10463 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 08:30:07.283542   10463 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 08:30:07.295180   10463 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 08:30:07.392098   10463 out.go:252]   - Generating certificates and keys ...
	I1025 08:30:07.392269   10463 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 08:30:07.392371   10463 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 08:30:08.035875   10463 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 08:30:08.143153   10463 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 08:30:08.372659   10463 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 08:30:08.546768   10463 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 08:30:08.694288   10463 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 08:30:08.694417   10463 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-631036 localhost] and IPs [192.168.39.24 127.0.0.1 ::1]
	I1025 08:30:08.798842   10463 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 08:30:08.798981   10463 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-631036 localhost] and IPs [192.168.39.24 127.0.0.1 ::1]
	I1025 08:30:08.962675   10463 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 08:30:09.241830   10463 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 08:30:09.301300   10463 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 08:30:09.301394   10463 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 08:30:09.709210   10463 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 08:30:09.775060   10463 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 08:30:10.049167   10463 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 08:30:10.397152   10463 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 08:30:10.715594   10463 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 08:30:10.716110   10463 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 08:30:10.718440   10463 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 08:30:10.720426   10463 out.go:252]   - Booting up control plane ...
	I1025 08:30:10.720550   10463 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 08:30:10.720658   10463 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 08:30:10.722464   10463 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 08:30:10.742580   10463 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 08:30:10.742759   10463 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 08:30:10.749658   10463 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 08:30:10.749889   10463 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 08:30:10.749955   10463 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 08:30:10.916473   10463 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 08:30:10.916638   10463 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 08:30:11.425813   10463 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 507.941174ms
	I1025 08:30:11.426127   10463 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 08:30:11.426341   10463 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.24:8443/livez
	I1025 08:30:11.426468   10463 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 08:30:11.426584   10463 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 08:30:13.581523   10463 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.156715375s
	I1025 08:30:15.404321   10463 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.981763602s
	I1025 08:30:17.423779   10463 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002618972s
	I1025 08:30:17.446079   10463 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 08:30:17.466375   10463 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 08:30:17.478713   10463 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 08:30:17.479013   10463 kubeadm.go:318] [mark-control-plane] Marking the node addons-631036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 08:30:17.496957   10463 kubeadm.go:318] [bootstrap-token] Using token: oukl03.1aed6xmxtahaalv2
	I1025 08:30:17.498387   10463 out.go:252]   - Configuring RBAC rules ...
	I1025 08:30:17.498527   10463 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 08:30:17.507604   10463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 08:30:17.516788   10463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 08:30:17.520354   10463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 08:30:17.523820   10463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 08:30:17.531152   10463 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 08:30:17.835844   10463 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 08:30:18.294401   10463 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 08:30:18.830499   10463 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 08:30:18.831613   10463 kubeadm.go:318] 
	I1025 08:30:18.831685   10463 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 08:30:18.831693   10463 kubeadm.go:318] 
	I1025 08:30:18.831756   10463 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 08:30:18.831788   10463 kubeadm.go:318] 
	I1025 08:30:18.831839   10463 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 08:30:18.831915   10463 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 08:30:18.831989   10463 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 08:30:18.832022   10463 kubeadm.go:318] 
	I1025 08:30:18.832104   10463 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 08:30:18.832114   10463 kubeadm.go:318] 
	I1025 08:30:18.832176   10463 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 08:30:18.832188   10463 kubeadm.go:318] 
	I1025 08:30:18.832298   10463 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 08:30:18.832400   10463 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 08:30:18.832492   10463 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 08:30:18.832504   10463 kubeadm.go:318] 
	I1025 08:30:18.832634   10463 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 08:30:18.832748   10463 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 08:30:18.832757   10463 kubeadm.go:318] 
	I1025 08:30:18.832872   10463 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token oukl03.1aed6xmxtahaalv2 \
	I1025 08:30:18.833044   10463 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:fe6caeb5ca9f886e925578a66a55439fd94175d5983e2e751a2d3d56b0fd904d \
	I1025 08:30:18.833078   10463 kubeadm.go:318] 	--control-plane 
	I1025 08:30:18.833083   10463 kubeadm.go:318] 
	I1025 08:30:18.833208   10463 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 08:30:18.833218   10463 kubeadm.go:318] 
	I1025 08:30:18.833361   10463 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token oukl03.1aed6xmxtahaalv2 \
	I1025 08:30:18.833516   10463 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:fe6caeb5ca9f886e925578a66a55439fd94175d5983e2e751a2d3d56b0fd904d 
	I1025 08:30:18.835156   10463 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 08:30:18.835189   10463 cni.go:84] Creating CNI manager for ""
	I1025 08:30:18.835201   10463 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 08:30:18.838073   10463 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 08:30:18.839635   10463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 08:30:18.858618   10463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
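	For orientation, a bridge conflist of this kind typically pairs the bridge plugin with host-local IPAM on the pod CIDR plus a portmap plugin. The sketch below is illustrative only (minikube's actual 1-k8s.conflist may differ in names and options) and reuses the 10.244.0.0/16 pod CIDR from the config above:
	
	  {
	    "cniVersion": "1.0.0",
	    "name": "bridge",
	    "plugins": [
	      { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16",
	                  "routes": [ { "dst": "0.0.0.0/0" } ] } },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }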
	I1025 08:30:18.883100   10463 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 08:30:18.883191   10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:18.883191   10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-631036 minikube.k8s.io/updated_at=2025_10_25T08_30_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=addons-631036 minikube.k8s.io/primary=true
	I1025 08:30:18.927072   10463 ops.go:34] apiserver oom_adj: -16
	I1025 08:30:19.041058   10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:19.542037   10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:20.042005   10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:20.542232   10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:21.041403   10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:21.541907   10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:22.041493   10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:22.541525   10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:23.041806   10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:23.541156   10463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:23.716964   10463 kubeadm.go:1113] duration metric: took 4.83385667s to wait for elevateKubeSystemPrivileges
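The ten `kubectl get sa default` runs above are a half-second poll: kubeadm creates the `default` service account asynchronously, and minikube blocks on its existence before declaring the cluster started. As a standalone sketch of the same wait:

	# poll (~500ms) until kubeadm has created the default service account
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
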
	I1025 08:30:23.717012   10463 kubeadm.go:402] duration metric: took 16.789327545s to StartCluster
	I1025 08:30:23.717031   10463 settings.go:142] acquiring lock: {Name:mkceaa31f1735308eeec0f271d1ae2367ed96dc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:23.717175   10463 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-5973/kubeconfig
	I1025 08:30:23.717858   10463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/kubeconfig: {Name:mk7395a01001bce28a4f8d18a1c883ac67624078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:23.718127   10463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 08:30:23.718124   10463 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 08:30:23.718144   10463 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
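The toEnable map is the resolved per-profile addon matrix (defaults plus anything toggled on the command line). The same switches are exposed imperatively, e.g.:

	minikube -p addons-631036 addons list
	minikube -p addons-631036 addons enable ingress
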
	I1025 08:30:23.718359   10463 addons.go:69] Setting yakd=true in profile "addons-631036"
	I1025 08:30:23.718382   10463 addons.go:238] Setting addon yakd=true in "addons-631036"
	I1025 08:30:23.718401   10463 config.go:182] Loaded profile config "addons-631036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:30:23.718418   10463 addons.go:69] Setting volcano=true in profile "addons-631036"
	I1025 08:30:23.718715   10463 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-631036"
	I1025 08:30:23.718774   10463 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-631036"
	I1025 08:30:23.718781   10463 addons.go:238] Setting addon volcano=true in "addons-631036"
	I1025 08:30:23.718819   10463 addons.go:69] Setting storage-provisioner=true in profile "addons-631036"
	I1025 08:30:23.718833   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.718853   10463 addons.go:238] Setting addon storage-provisioner=true in "addons-631036"
	I1025 08:30:23.718879   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.718974   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.718997   10463 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-631036"
	I1025 08:30:23.719041   10463 addons.go:69] Setting registry=true in profile "addons-631036"
	I1025 08:30:23.719049   10463 addons.go:69] Setting ingress=true in profile "addons-631036"
	I1025 08:30:23.719060   10463 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-631036"
	I1025 08:30:23.719065   10463 addons.go:238] Setting addon ingress=true in "addons-631036"
	I1025 08:30:23.719078   10463 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-631036"
	I1025 08:30:23.719090   10463 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-631036"
	I1025 08:30:23.719099   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.719111   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.719153   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.719169   10463 addons.go:69] Setting inspektor-gadget=true in profile "addons-631036"
	I1025 08:30:23.719183   10463 addons.go:238] Setting addon inspektor-gadget=true in "addons-631036"
	I1025 08:30:23.719206   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.720062   10463 addons.go:69] Setting volumesnapshots=true in profile "addons-631036"
	I1025 08:30:23.720087   10463 addons.go:238] Setting addon volumesnapshots=true in "addons-631036"
	I1025 08:30:23.720112   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.720758   10463 addons.go:69] Setting metrics-server=true in profile "addons-631036"
	I1025 08:30:23.720781   10463 addons.go:238] Setting addon metrics-server=true in "addons-631036"
	I1025 08:30:23.720805   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.721108   10463 addons.go:69] Setting ingress-dns=true in profile "addons-631036"
	I1025 08:30:23.721134   10463 addons.go:238] Setting addon ingress-dns=true in "addons-631036"
	I1025 08:30:23.721172   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.721636   10463 addons.go:69] Setting registry-creds=true in profile "addons-631036"
	I1025 08:30:23.721658   10463 addons.go:238] Setting addon registry-creds=true in "addons-631036"
	I1025 08:30:23.719052   10463 addons.go:238] Setting addon registry=true in "addons-631036"
	I1025 08:30:23.721679   10463 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-631036"
	I1025 08:30:23.721690   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.721702   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.721741   10463 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-631036"
	I1025 08:30:23.721770   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.721891   10463 addons.go:69] Setting default-storageclass=true in profile "addons-631036"
	I1025 08:30:23.721928   10463 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-631036"
	I1025 08:30:23.722395   10463 addons.go:69] Setting gcp-auth=true in profile "addons-631036"
	I1025 08:30:23.722421   10463 mustload.go:65] Loading cluster: addons-631036
	I1025 08:30:23.722707   10463 addons.go:69] Setting cloud-spanner=true in profile "addons-631036"
	I1025 08:30:23.722729   10463 addons.go:238] Setting addon cloud-spanner=true in "addons-631036"
	I1025 08:30:23.722753   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.722788   10463 out.go:179] * Verifying Kubernetes components...
	I1025 08:30:23.722704   10463 config.go:182] Loaded profile config "addons-631036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:30:23.725104   10463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:30:23.727473   10463 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 08:30:23.727500   10463 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 08:30:23.727489   10463 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1025 08:30:23.727542   10463 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	W1025 08:30:23.729080   10463 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1025 08:30:23.729213   10463 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 08:30:23.729494   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 08:30:23.729835   10463 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 08:30:23.729838   10463 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 08:30:23.729858   10463 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 08:30:23.730397   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 08:30:23.729898   10463 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 08:30:23.730573   10463 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 08:30:23.730806   10463 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:30:23.731433   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.731650   10463 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1025 08:30:23.731772   10463 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 08:30:23.731999   10463 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 08:30:23.732328   10463 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 08:30:23.732614   10463 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 08:30:23.733059   10463 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 08:30:23.732624   10463 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 08:30:23.732963   10463 addons.go:238] Setting addon default-storageclass=true in "addons-631036"
	I1025 08:30:23.733295   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.732965   10463 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-631036"
	I1025 08:30:23.733390   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:23.733469   10463 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 08:30:23.733479   10463 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 08:30:23.733488   10463 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 08:30:23.734985   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 08:30:23.733601   10463 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 08:30:23.734602   10463 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 08:30:23.735130   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 08:30:23.735434   10463 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 08:30:23.735471   10463 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:30:23.736267   10463 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 08:30:23.736286   10463 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 08:30:23.736329   10463 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 08:30:23.736658   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 08:30:23.737174   10463 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 08:30:23.737269   10463 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 08:30:23.737548   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 08:30:23.737311   10463 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 08:30:23.737655   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 08:30:23.737980   10463 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 08:30:23.738168   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.738718   10463 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 08:30:23.738736   10463 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 08:30:23.739655   10463 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 08:30:23.739672   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 08:30:23.739675   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.740177   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.740209   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.740295   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.740749   10463 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 08:30:23.740767   10463 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 08:30:23.740925   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:23.741467   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.741498   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.741811   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.742206   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.742260   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.742316   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:23.743093   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:23.743493   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.743521   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.744052   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:23.744404   10463 out.go:179]   - Using image docker.io/busybox:stable
	I1025 08:30:23.744485   10463 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 08:30:23.745022   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.745884   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.745985   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.746015   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.746139   10463 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 08:30:23.746159   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 08:30:23.746580   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.746620   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:23.747132   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.747164   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.747490   10463 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 08:30:23.747498   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:23.747697   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.748082   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.748158   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.748195   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.748666   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:23.749139   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.749170   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.749377   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.749438   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.749458   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.749473   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.749525   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:23.750365   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.750381   10463 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 08:30:23.750382   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:23.750408   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.750393   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.750722   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.750394   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.750395   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.751005   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:23.751205   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:23.751262   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.751297   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.751513   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:23.751709   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.751739   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.751922   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:23.752724   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.753126   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.753148   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.753323   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:23.753476   10463 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 08:30:23.755130   10463 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 08:30:23.756291   10463 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 08:30:23.756314   10463 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 08:30:23.759017   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.759409   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:23.759438   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:23.759618   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	W1025 08:30:24.149670   10463 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:45356->192.168.39.24:22: read: connection reset by peer
	I1025 08:30:24.149698   10463 retry.go:31] will retry after 126.652669ms: ssh: handshake failed: read tcp 192.168.39.1:45356->192.168.39.24:22: read: connection reset by peer
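A handshake reset this early is ordinary: sshd inside the freshly booted VM may not be accepting connections yet, so minikube retries with a short backoff (126ms here). A rough shell equivalent, using the key path and user recorded in the sshutil lines above:

	for delay in 0.1 0.2 0.4 0.8; do
	  ssh -i /home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa \
	      -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
	      docker@192.168.39.24 true && break
	  sleep "$delay"
	done
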
	I1025 08:30:24.674967   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 08:30:24.750161   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 08:30:24.760436   10463 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 08:30:24.760456   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 08:30:24.838632   10463 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:24.838652   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 08:30:24.864773   10463 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 08:30:24.864793   10463 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 08:30:24.864949   10463 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 08:30:24.864976   10463 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 08:30:24.912700   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 08:30:24.917528   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 08:30:24.951561   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 08:30:24.972820   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 08:30:24.975250   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 08:30:25.001694   10463 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 08:30:25.001723   10463 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 08:30:25.046097   10463 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 08:30:25.046125   10463 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 08:30:25.050834   10463 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.332662616s)
	I1025 08:30:25.050917   10463 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.325750231s)
	I1025 08:30:25.051011   10463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:30:25.051017   10463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
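The sed pipeline above splices a `hosts` block (and a `log` directive) into the stock Corefile before replacing the ConfigMap; this is what makes host.minikube.internal resolve from inside pods. The resulting fragment of the Corefile, with the unmodified kubeadm-default directives elided:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}
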
	I1025 08:30:25.210679   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 08:30:25.221126   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 08:30:25.230205   10463 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 08:30:25.230263   10463 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 08:30:25.412545   10463 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 08:30:25.412584   10463 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 08:30:25.419332   10463 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 08:30:25.419359   10463 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 08:30:25.438132   10463 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 08:30:25.438157   10463 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 08:30:25.440644   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:25.480010   10463 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 08:30:25.480031   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 08:30:25.658443   10463 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 08:30:25.658477   10463 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 08:30:25.689303   10463 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 08:30:25.689336   10463 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 08:30:25.752066   10463 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 08:30:25.752092   10463 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 08:30:25.760662   10463 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 08:30:25.760685   10463 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 08:30:25.795722   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 08:30:26.007912   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 08:30:26.019467   10463 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 08:30:26.019491   10463 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 08:30:26.083668   10463 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 08:30:26.083693   10463 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 08:30:26.091730   10463 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 08:30:26.091752   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 08:30:26.376734   10463 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:30:26.376757   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 08:30:26.414502   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 08:30:26.445597   10463 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 08:30:26.445631   10463 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 08:30:26.645098   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.970089507s)
	I1025 08:30:26.757952   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:30:26.857543   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.107343906s)
	I1025 08:30:26.857646   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.944915549s)
	I1025 08:30:26.980342   10463 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 08:30:26.980372   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 08:30:27.556579   10463 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 08:30:27.556601   10463 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 08:30:27.932089   10463 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 08:30:27.932114   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 08:30:28.298464   10463 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 08:30:28.298496   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 08:30:28.785072   10463 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 08:30:28.785104   10463 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 08:30:29.284463   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 08:30:30.342283   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.424712998s)
	I1025 08:30:31.203782   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.252168193s)
	I1025 08:30:31.203954   10463 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 08:30:31.207066   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:31.207633   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:31.207670   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:31.207922   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:31.759938   10463 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 08:30:32.021815   10463 addons.go:238] Setting addon gcp-auth=true in "addons-631036"
	I1025 08:30:32.021883   10463 host.go:66] Checking if "addons-631036" exists ...
	I1025 08:30:32.024391   10463 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 08:30:32.027337   10463 main.go:141] libmachine: domain addons-631036 has defined MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:32.027885   10463 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:3b:0f", ip: ""} in network mk-addons-631036: {Iface:virbr1 ExpiryTime:2025-10-25 09:29:56 +0000 UTC Type:0 Mac:52:54:00:04:3b:0f Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-631036 Clientid:01:52:54:00:04:3b:0f}
	I1025 08:30:32.027924   10463 main.go:141] libmachine: domain addons-631036 has defined IP address 192.168.39.24 and MAC address 52:54:00:04:3b:0f in network mk-addons-631036
	I1025 08:30:32.028213   10463 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/addons-631036/id_rsa Username:docker}
	I1025 08:30:33.169903   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.197031057s)
	I1025 08:30:33.169944   10463 addons.go:479] Verifying addon ingress=true in "addons-631036"
	I1025 08:30:33.169993   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.194708259s)
	I1025 08:30:33.170124   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.959418667s)
	I1025 08:30:33.170028   10463 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.118997612s)
	I1025 08:30:33.170213   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.949054705s)
	I1025 08:30:33.170062   10463 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.119024231s)
	I1025 08:30:33.170268   10463 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1025 08:30:33.170337   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.729652016s)
	I1025 08:30:33.170364   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.37461001s)
	W1025 08:30:33.170365   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:33.170381   10463 addons.go:479] Verifying addon registry=true in "addons-631036"
	I1025 08:30:33.170390   10463 retry.go:31] will retry after 161.771319ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
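The failing file is telling: the transfer at 08:30:23.730573 copied ig-crd.yaml over as only 14 bytes, too small to carry the apiVersion and kind fields that kubectl validation demands of every manifest, so the CRD apply fails while the rest of the batch succeeds; the retry at 08:30:33.333 re-applies with --force. For reference, the minimum shape the validator is checking for is below (the metadata.name is a placeholder, not the gadget CRD's real name):

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.example.io   # placeholder
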
	I1025 08:30:33.170476   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.162532812s)
	I1025 08:30:33.170490   10463 addons.go:479] Verifying addon metrics-server=true in "addons-631036"
	I1025 08:30:33.170580   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.756038893s)
	I1025 08:30:33.170993   10463 node_ready.go:35] waiting up to 6m0s for node "addons-631036" to be "Ready" ...
	I1025 08:30:33.173412   10463 out.go:179] * Verifying ingress addon...
	I1025 08:30:33.173434   10463 out.go:179] * Verifying registry addon...
	I1025 08:30:33.173441   10463 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-631036 service yakd-dashboard -n yakd-dashboard
	
	I1025 08:30:33.175592   10463 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 08:30:33.175757   10463 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 08:30:33.223701   10463 node_ready.go:49] node "addons-631036" is "Ready"
	I1025 08:30:33.223746   10463 node_ready.go:38] duration metric: took 52.724214ms for node "addons-631036" to be "Ready" ...
	I1025 08:30:33.223765   10463 api_server.go:52] waiting for apiserver process to appear ...
	I1025 08:30:33.223826   10463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:30:33.241332   10463 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 08:30:33.241365   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:33.241332   10463 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 08:30:33.241397   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 08:30:33.311424   10463 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
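This is Kubernetes' optimistic concurrency at work: every object carries a resourceVersion, and an update computed against a stale version is rejected with exactly this "object has been modified" conflict; here the local-path StorageClass was being annotated while another writer touched it. A patch avoids carrying a stale resourceVersion at all, e.g. (a sketch of the same default-class toggle):

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
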
	I1025 08:30:33.333215   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:33.479206   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.721212928s)
	W1025 08:30:33.479276   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 08:30:33.479303   10463 retry.go:31] will retry after 201.571306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
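This is a CRD-ordering race rather than a broken manifest: the VolumeSnapshot CRDs are created in this same apply batch, but a CRD only serves its kinds once the API server marks it Established, so the VolumeSnapshotClass in the batch can race it. The retry ~200ms later lands after establishment; an explicit wait expresses the ordering (sketch):

	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
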
	I1025 08:30:33.681396   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:30:33.709556   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:33.710027   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:33.712260   10463 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-631036" context rescaled to 1 replicas
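kubeadm ships CoreDNS with two replicas; on a single-node profile minikube rescales the deployment to one, equivalent to:

	kubectl -n kube-system scale deployment coredns --replicas=1
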
	I1025 08:30:34.203629   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:34.205145   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:34.624502   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.339995599s)
	I1025 08:30:34.624552   10463 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-631036"
	I1025 08:30:34.624555   10463 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.600132054s)
	I1025 08:30:34.624601   10463 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.400753651s)
	I1025 08:30:34.624636   10463 api_server.go:72] duration metric: took 10.906411211s to wait for apiserver process to appear ...
	I1025 08:30:34.624701   10463 api_server.go:88] waiting for apiserver healthz status ...
	I1025 08:30:34.624725   10463 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I1025 08:30:34.626800   10463 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 08:30:34.626853   10463 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:30:34.628452   10463 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 08:30:34.629353   10463 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 08:30:34.629746   10463 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 08:30:34.629769   10463 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 08:30:34.693510   10463 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 08:30:34.693545   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
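The repeated kapi.go:96 lines throughout this log are a poll loop: roughly every 500ms minikube re-lists the pods matching the addon's label selector until they leave Pending. A standalone approximation, assuming kubectl on PATH; the selector is copied from the log, and checking the Running phase is a simplification of the real readiness test:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func anyPending(phases []string) bool {
	for _, p := range phases {
		if p == "Pending" {
			return true
		}
	}
	return false
}

func main() {
	selector := "kubernetes.io/minikube-addons=csi-hostpath-driver" // from the log
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// List the phases of all pods matching the selector.
		out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods",
			"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 && !anyPending(phases) {
			fmt.Println("pods past Pending:", phases)
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	fmt.Println("timed out waiting for", selector)
}
```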
	I1025 08:30:34.708218   10463 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I1025 08:30:34.712385   10463 api_server.go:141] control plane version: v1.34.1
	I1025 08:30:34.712416   10463 api_server.go:131] duration metric: took 87.706392ms to wait for apiserver health ...
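The healthz wait above reduces to GETting https://<apiserver>:8443/healthz until it returns 200 with body "ok". A bare-bones version of that probe; InsecureSkipVerify stands in for minikube's real client-certificate setup, so this is demo-only:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Demo only: skips certificate verification instead of
			// configuring the cluster CA and client certs.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.24:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect: 200: ok
}
```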
	I1025 08:30:34.712428   10463 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 08:30:34.760373   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:34.760492   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:34.762924   10463 system_pods.go:59] 20 kube-system pods found
	I1025 08:30:34.762966   10463 system_pods.go:61] "amd-gpu-device-plugin-frvrc" [201f5833-8bf6-475d-82b1-c927a3c7317b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 08:30:34.762975   10463 system_pods.go:61] "coredns-66bc5c9577-8mtlq" [5d8ef08e-3e63-4391-b058-8567251dc2f6] Running
	I1025 08:30:34.762983   10463 system_pods.go:61] "coredns-66bc5c9577-wk56k" [1147dfe5-42e8-493d-b71e-b18c2dccea1a] Running
	I1025 08:30:34.763000   10463 system_pods.go:61] "csi-hostpath-attacher-0" [1300984c-bdb1-4a67-ad5f-38678737bd63] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:30:34.763019   10463 system_pods.go:61] "csi-hostpath-resizer-0" [cc67468a-08d6-4bfc-8f9f-034995939f82] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 08:30:34.763034   10463 system_pods.go:61] "csi-hostpathplugin-zf5nw" [263033f3-4c81-4830-bd0e-0c77d25821c6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 08:30:34.763047   10463 system_pods.go:61] "etcd-addons-631036" [5163e635-4efb-4129-86e9-b4cceeca0896] Running
	I1025 08:30:34.763062   10463 system_pods.go:61] "kube-apiserver-addons-631036" [27881d61-62f0-46a2-b3c6-7b2dcb073b61] Running
	I1025 08:30:34.763071   10463 system_pods.go:61] "kube-controller-manager-addons-631036" [6d8cf523-be28-41de-8226-906150d433e4] Running
	I1025 08:30:34.763079   10463 system_pods.go:61] "kube-ingress-dns-minikube" [b8340127-56b4-4638-b7ca-1a5815a313cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 08:30:34.763087   10463 system_pods.go:61] "kube-proxy-nzdhm" [d3cd3e35-b924-472f-9218-233cdce69396] Running
	I1025 08:30:34.763093   10463 system_pods.go:61] "kube-scheduler-addons-631036" [f8cea14e-4cb3-4341-910c-a1fea712966f] Running
	I1025 08:30:34.763105   10463 system_pods.go:61] "metrics-server-85b7d694d7-4b2tp" [060cbc46-1bf9-48ba-b6eb-9f0fe9e1a912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 08:30:34.763116   10463 system_pods.go:61] "nvidia-device-plugin-daemonset-65m2r" [d049181d-68c1-439c-bfbb-61eff9e986fa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 08:30:34.763130   10463 system_pods.go:61] "registry-6b586f9694-h2dlk" [deafd51c-1def-42f4-bf1d-433def2f97c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 08:30:34.763148   10463 system_pods.go:61] "registry-creds-764b6fb674-kzcnh" [2d445e86-d667-49cf-a274-1872cf7d57a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:30:34.763159   10463 system_pods.go:61] "registry-proxy-lfzv8" [44090c69-a71c-43ba-9342-a65d7cdcbea7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 08:30:34.763171   10463 system_pods.go:61] "snapshot-controller-7d9fbc56b8-k8kjc" [9bbac9c3-9506-4a94-8825-12d563a4ec5a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:34.763188   10463 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rmtbf" [67657df2-ca6b-4f00-a043-b8fdf294e0b6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:34.763198   10463 system_pods.go:61] "storage-provisioner" [48ababc1-07e4-4d36-89b2-8a6c8d29de6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:30:34.763206   10463 system_pods.go:74] duration metric: took 50.772134ms to wait for pod list to return data ...
	I1025 08:30:34.763217   10463 default_sa.go:34] waiting for default service account to be created ...
	I1025 08:30:34.773772   10463 default_sa.go:45] found service account: "default"
	I1025 08:30:34.773809   10463 default_sa.go:55] duration metric: took 10.584761ms for default service account to be created ...
	I1025 08:30:34.773826   10463 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 08:30:34.786641   10463 system_pods.go:86] 20 kube-system pods found
	I1025 08:30:34.786679   10463 system_pods.go:89] "amd-gpu-device-plugin-frvrc" [201f5833-8bf6-475d-82b1-c927a3c7317b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 08:30:34.786694   10463 system_pods.go:89] "coredns-66bc5c9577-8mtlq" [5d8ef08e-3e63-4391-b058-8567251dc2f6] Running
	I1025 08:30:34.786704   10463 system_pods.go:89] "coredns-66bc5c9577-wk56k" [1147dfe5-42e8-493d-b71e-b18c2dccea1a] Running
	I1025 08:30:34.786712   10463 system_pods.go:89] "csi-hostpath-attacher-0" [1300984c-bdb1-4a67-ad5f-38678737bd63] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:30:34.786723   10463 system_pods.go:89] "csi-hostpath-resizer-0" [cc67468a-08d6-4bfc-8f9f-034995939f82] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 08:30:34.786732   10463 system_pods.go:89] "csi-hostpathplugin-zf5nw" [263033f3-4c81-4830-bd0e-0c77d25821c6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 08:30:34.786737   10463 system_pods.go:89] "etcd-addons-631036" [5163e635-4efb-4129-86e9-b4cceeca0896] Running
	I1025 08:30:34.786743   10463 system_pods.go:89] "kube-apiserver-addons-631036" [27881d61-62f0-46a2-b3c6-7b2dcb073b61] Running
	I1025 08:30:34.786753   10463 system_pods.go:89] "kube-controller-manager-addons-631036" [6d8cf523-be28-41de-8226-906150d433e4] Running
	I1025 08:30:34.786760   10463 system_pods.go:89] "kube-ingress-dns-minikube" [b8340127-56b4-4638-b7ca-1a5815a313cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 08:30:34.786764   10463 system_pods.go:89] "kube-proxy-nzdhm" [d3cd3e35-b924-472f-9218-233cdce69396] Running
	I1025 08:30:34.786767   10463 system_pods.go:89] "kube-scheduler-addons-631036" [f8cea14e-4cb3-4341-910c-a1fea712966f] Running
	I1025 08:30:34.786774   10463 system_pods.go:89] "metrics-server-85b7d694d7-4b2tp" [060cbc46-1bf9-48ba-b6eb-9f0fe9e1a912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 08:30:34.786782   10463 system_pods.go:89] "nvidia-device-plugin-daemonset-65m2r" [d049181d-68c1-439c-bfbb-61eff9e986fa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 08:30:34.786790   10463 system_pods.go:89] "registry-6b586f9694-h2dlk" [deafd51c-1def-42f4-bf1d-433def2f97c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 08:30:34.786802   10463 system_pods.go:89] "registry-creds-764b6fb674-kzcnh" [2d445e86-d667-49cf-a274-1872cf7d57a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:30:34.786809   10463 system_pods.go:89] "registry-proxy-lfzv8" [44090c69-a71c-43ba-9342-a65d7cdcbea7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 08:30:34.786827   10463 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k8kjc" [9bbac9c3-9506-4a94-8825-12d563a4ec5a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:34.786837   10463 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rmtbf" [67657df2-ca6b-4f00-a043-b8fdf294e0b6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:30:34.786847   10463 system_pods.go:89] "storage-provisioner" [48ababc1-07e4-4d36-89b2-8a6c8d29de6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:30:34.786855   10463 system_pods.go:126] duration metric: took 13.023012ms to wait for k8s-apps to be running ...
	I1025 08:30:34.786865   10463 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 08:30:34.786917   10463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:30:34.891269   10463 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 08:30:34.891335   10463 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 08:30:35.098085   10463 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 08:30:35.098117   10463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 08:30:35.142761   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:35.173544   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 08:30:35.242943   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:35.245068   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:35.636347   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:35.680628   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:35.683787   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:36.139969   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:36.240967   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:36.241066   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:36.636285   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:36.683231   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:36.684933   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:36.944894   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.263442366s)
	I1025 08:30:36.944948   10463 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.158003257s)
	I1025 08:30:36.944978   10463 system_svc.go:56] duration metric: took 2.158109744s WaitForService to wait for kubelet
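WaitForService is a thin wrapper over systemd: `systemctl is-active --quiet` exits 0 when the unit is active and non-zero otherwise, so the check is just an exit-code test. A sketch with the arguments copied from the log; minikube actually runs this over SSH inside the VM, and it needs passwordless sudo:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means the kubelet unit is active; anything else is a failure.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```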
	I1025 08:30:36.944990   10463 kubeadm.go:586] duration metric: took 13.226764568s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:30:36.945022   10463 node_conditions.go:102] verifying NodePressure condition ...
	I1025 08:30:36.946729   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.613456018s)
	W1025 08:30:36.946767   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:36.946785   10463 retry.go:31] will retry after 411.387705ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
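The ig-crd.yaml failure is client-side validation, not an API error: every document in a manifest must carry apiVersion and kind (TypeMeta), and this file evidently has neither, so kubectl rejects it before contacting the server, which is why the retries below keep failing with the identical message. A sketch of the same pre-flight check; gopkg.in/yaml.v3 is an assumed stand-in for kubectl's own decoder, and the sample manifest is hypothetical:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// typeMeta captures the two fields kubectl's validation requires.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	manifest := []byte("metadata:\n  name: example\n") // no apiVersion, no kind
	var tm typeMeta
	if err := yaml.Unmarshal(manifest, &tm); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	if tm.APIVersion == "" || tm.Kind == "" {
		// Mirrors kubectl's message: [apiVersion not set, kind not set]
		fmt.Println("error validating data: apiVersion/kind not set")
		return
	}
	fmt.Println("ok:", tm.APIVersion, tm.Kind)
}
```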
	I1025 08:30:36.975838   10463 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 08:30:36.975870   10463 node_conditions.go:123] node cpu capacity is 2
	I1025 08:30:36.975880   10463 node_conditions.go:105] duration metric: took 30.851109ms to run NodePressure ...
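The NodePressure step reads the node's reported capacity, here 17734596Ki of ephemeral storage and 2 CPUs. The same data is available with a standard jsonpath query; this snippet is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print the capacity map of the first node.
	out, err := exec.Command("kubectl", "get", "nodes",
		"-o", "jsonpath={.items[0].status.capacity}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	// e.g. {"cpu":"2","ephemeral-storage":"17734596Ki","memory":"...","pods":"110"}
	fmt.Println(string(out))
}
```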
	I1025 08:30:36.975891   10463 start.go:241] waiting for startup goroutines ...
	I1025 08:30:37.172191   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.998592015s)
	I1025 08:30:37.173586   10463 addons.go:479] Verifying addon gcp-auth=true in "addons-631036"
	I1025 08:30:37.175723   10463 out.go:179] * Verifying gcp-auth addon...
	I1025 08:30:37.178154   10463 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 08:30:37.196291   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:37.217903   10463 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 08:30:37.217926   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:37.217920   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:37.221209   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:37.358403   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:37.637520   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:37.687197   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:37.688584   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:37.688890   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:38.138159   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:38.183576   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:38.183711   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:38.186756   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:38.637032   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:38.687008   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:38.687127   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:38.687207   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:38.811716   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.453272214s)
	W1025 08:30:38.811752   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:38.811774   10463 retry.go:31] will retry after 375.905371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:39.136332   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:39.183472   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:39.185882   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:39.186807   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:39.187882   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:39.637596   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:39.684994   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:39.685953   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:39.687608   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:40.137909   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:40.183957   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:40.184301   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:40.187536   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:40.339312   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.151397345s)
	W1025 08:30:40.339363   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:40.339428   10463 retry.go:31] will retry after 1.094514967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:40.637012   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:40.683548   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:40.685179   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:40.685686   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:41.139864   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:41.239638   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:41.239740   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:41.239866   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:41.434186   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:41.634864   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:41.681133   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:41.681649   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:41.685408   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:42.133729   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:30:42.148753   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:42.148790   10463 retry.go:31] will retry after 1.882995844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:42.179013   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:42.179160   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:42.181755   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:42.636100   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:42.686740   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:42.689861   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:42.691770   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:43.136138   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:43.182385   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:43.185442   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:43.185643   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:43.634724   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:43.735107   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:43.735190   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:43.735540   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:44.032984   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:44.132658   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:44.185266   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:44.185459   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:44.187053   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:44.634608   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:44.682046   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:44.683625   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:44.683832   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:30:44.789499   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:44.789541   10463 retry.go:31] will retry after 2.403366064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:45.134322   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:45.180530   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:45.185229   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:45.185609   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:45.648496   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:45.682776   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:45.683293   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:45.683546   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:46.135279   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:46.182197   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:46.183390   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:46.183814   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:46.636345   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:46.685350   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:46.692876   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:46.693638   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:47.136969   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:47.189056   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:47.189272   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:47.190364   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:47.193549   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:47.637198   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:47.681509   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:47.682388   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:47.685052   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:48.136799   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:48.186935   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:48.187006   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:48.188132   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:48.445274   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.251681204s)
	W1025 08:30:48.445325   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:48.445345   10463 retry.go:31] will retry after 3.592234871s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:48.637197   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:48.686693   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:48.686717   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:48.687541   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:49.186095   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:49.186272   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:49.186391   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:49.187314   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:49.634321   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:49.681835   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:49.681848   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:49.683046   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:50.133612   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:50.179883   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:50.180546   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:50.181810   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:50.635269   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:50.736092   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:50.736148   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:50.736732   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:51.134263   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:51.180443   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:51.180599   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:51.181984   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:51.636034   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:51.683186   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:51.685881   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:51.686704   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:52.038297   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:52.136871   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:52.188752   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:52.189282   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:52.189376   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:52.638321   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:52.684397   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:52.684658   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:52.685762   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:53.137270   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:53.182823   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:53.182883   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:53.191182   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:53.240863   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.202521921s)
	W1025 08:30:53.240912   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:53.240936   10463 retry.go:31] will retry after 3.219637926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:53.634079   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:53.683089   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:53.683119   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:53.687630   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:54.134421   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:54.182994   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:54.183019   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:54.183094   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:54.635899   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:54.679537   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:54.681892   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:54.684397   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:55.134787   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:55.180999   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:55.182415   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:55.182615   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:55.643462   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:55.687677   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:55.689286   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:55.692255   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:56.134140   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:56.377339   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:56.377795   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:56.378135   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:56.461394   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:56.634936   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:56.682968   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:56.687041   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:56.687742   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:57.135075   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:57.193305   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:57.198194   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:57.199028   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:57.637044   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:57.682101   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:57.682617   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:57.684136   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:57.713347   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.251894952s)
	W1025 08:30:57.713397   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:30:57.713421   10463 retry.go:31] will retry after 6.487569446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
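
The retry loop above cannot succeed: kubectl's client-side validation rejects any manifest document that omits the two mandatory top-level fields, apiVersion and kind, and that is exactly what the stderr reports for ig-crd.yaml. As a minimal sketch (every name below is illustrative, not the real contents of ig-crd.yaml), a CRD that carries both fields passes the same check:

    # Hypothetical manifest piped through the same client-side validation that
    # fails in the log; apiVersion and kind are the fields the error names.
    cat <<'EOF' | kubectl apply --dry-run=client -f -
    apiVersion: apiextensions.k8s.io/v1      # mandatory top-level field
    kind: CustomResourceDefinition           # mandatory top-level field
    metadata:
      name: widgets.example.com              # must be <plural>.<group>
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: widgets
        singular: widget
        kind: Widget
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
    EOF

Note that the --validate=false escape hatch suggested by the error would likely not help here: an object without kind cannot be decoded into a typed resource at all, so skipping client validation only moves the failure, it does not avoid it.
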
	I1025 08:30:58.134368   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:58.183040   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:58.183308   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:58.183637   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:58.637117   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:58.693420   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:58.693470   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:58.693617   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:59.135677   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:59.237124   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:59.237439   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:30:59.237590   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:59.638457   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:30:59.697903   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:30:59.698150   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:30:59.698160   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:00.134020   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:00.183585   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:00.185505   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:00.186004   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:00.638413   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:00.680932   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:00.682169   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:00.683065   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:01.134584   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:01.181017   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:01.182978   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:01.183524   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:01.634281   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:01.683221   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:01.683285   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:01.687893   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:02.134593   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:02.184343   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:02.185136   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:02.185230   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:02.634510   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:02.736270   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:02.736374   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:02.736378   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:03.135920   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:03.180155   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:03.181790   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:03.183902   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:03.633888   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:03.679433   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:03.680276   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:03.681622   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:04.136944   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:04.179832   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:04.180070   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:04.182843   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:04.202085   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:04.634796   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:04.685464   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:04.685467   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:04.687400   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:05.133652   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:05.180892   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:05.181952   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:05.185074   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:05.253092   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.050967222s)
	W1025 08:31:05.253138   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:05.253158   10463 retry.go:31] will retry after 9.611661127s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 08:31:05.635850   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:05.682702   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:05.684623   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:05.684951   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:06.135808   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:06.181303   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:06.182800   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:06.191605   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:06.632905   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:06.695445   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:06.695733   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:06.695866   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:07.135158   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:07.187891   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:07.187917   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:07.188426   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:07.636927   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:07.679698   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:07.679703   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:07.680864   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:08.137966   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:08.179379   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:08.180468   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:08.181432   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:08.633519   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:08.681183   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:08.681405   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:08.682130   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:09.134806   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:09.180701   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:09.181033   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:09.183852   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:09.638834   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:09.681175   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:09.684743   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:09.686200   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:10.136650   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:10.183901   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:10.184134   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:10.188470   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:10.637219   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:10.683229   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:10.683258   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:10.683735   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:11.137582   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:11.190192   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:11.190460   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:11.192032   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:11.634859   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:11.679822   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:11.681714   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:11.683266   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:12.135497   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:12.181719   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:12.183887   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:12.184413   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:12.633656   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:12.679872   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:12.680183   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:12.682554   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:13.135401   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:13.236962   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:13.237058   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:13.237146   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:13.634100   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:13.679696   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:13.680994   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:13.682157   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:14.135139   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:14.178855   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:14.182979   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:14.183223   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:14.633665   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:14.679292   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:14.680645   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:14.681994   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:14.865427   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:15.136562   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:15.179817   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:15.185511   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:15.186155   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:15.634784   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:15.681450   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:15.685551   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:15.688108   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:31:15.734177   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:15.734215   10463 retry.go:31] will retry after 9.851270621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 08:31:16.136452   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:16.184796   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:16.186441   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:16.186492   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:16.634126   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:16.683408   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:16.685349   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:16.686033   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:17.135130   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:17.183523   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:17.185429   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:17.185721   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:17.634923   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:17.680675   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:17.682904   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:17.683308   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:18.134584   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:18.181740   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:18.182904   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:18.184812   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:18.634136   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:18.679462   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:18.682234   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:18.682414   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:19.140660   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:19.180260   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:19.181589   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:19.182997   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:19.633603   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:19.735476   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:19.736807   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:19.737770   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:20.136917   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:20.182628   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:20.185476   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:20.186940   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:20.638988   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:20.679820   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:20.681378   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:20.681934   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:21.134229   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:21.179118   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:21.179887   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:21.181613   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:21.634922   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:21.682855   10463 kapi.go:107] duration metric: took 48.507262271s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 08:31:21.683090   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:21.686412   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:22.132844   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:22.180651   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:22.182217   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:22.633920   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:22.682842   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:22.684978   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:23.135684   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:23.182499   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:23.186255   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:23.633760   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:23.681056   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:23.682411   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:24.133534   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:24.184702   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:24.189084   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:24.638984   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:24.680976   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:24.686411   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:25.134228   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:25.179432   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:25.182625   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:25.586029   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:25.635868   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:25.683125   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:25.683222   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:26.134662   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:26.182502   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:26.187949   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:26.634871   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:26.682525   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:26.683433   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:26.995723   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.409659939s)
	W1025 08:31:26.995762   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:26.995779   10463 retry.go:31] will retry after 23.000661637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 08:31:27.134930   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:27.181899   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:27.183956   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:27.634742   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:27.680485   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:27.681996   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:28.135994   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:28.179604   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:28.181339   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:28.634358   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:28.681260   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:28.681994   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:29.140682   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:29.243846   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:29.244518   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:29.634492   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:29.685362   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:29.685770   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:30.134120   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:30.181998   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:30.192396   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:30.640176   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:30.680750   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:30.682558   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:31.136095   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:31.184588   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:31.184682   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:31.638578   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:31.688389   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:31.688981   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:32.134700   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:32.180059   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:32.182494   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:32.638852   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:32.740178   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:32.740575   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:33.134145   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:33.182128   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:33.183224   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:33.633689   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:33.688207   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:33.691127   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:34.135309   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:34.182276   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:34.186107   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:34.633296   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:34.680182   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:34.684122   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:35.139041   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:35.184261   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:35.186305   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:35.633203   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:35.733785   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:35.734010   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:36.133727   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:36.190841   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:36.190880   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:36.635277   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:36.680343   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:36.681791   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:37.134457   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:37.181683   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:37.184994   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:37.633368   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:37.682453   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:37.684359   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:38.137051   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:38.180784   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:38.187823   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:38.634232   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:38.684379   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:38.685231   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:39.134669   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:39.186644   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:39.187632   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:39.634016   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:39.679958   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:39.683785   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:40.136294   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:40.183011   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:40.186375   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:40.636720   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:40.684141   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:40.684295   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:41.139213   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:41.183053   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:41.191216   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:41.644092   10463 kapi.go:107] duration metric: took 1m7.014660887s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 08:31:41.680422   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:41.682918   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:42.181552   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:42.187551   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:42.680050   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:42.685527   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:43.185397   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:43.190024   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:43.682424   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:43.684493   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:44.184250   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:44.184303   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:44.683715   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:44.684803   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:45.303304   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:45.303560   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:45.682349   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:45.682898   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:46.181955   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:46.183118   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:46.683782   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:46.685045   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:47.189750   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:47.190531   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:47.680187   10463 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:47.682612   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:48.180159   10463 kapi.go:107] duration metric: took 1m15.004397568s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 08:31:48.183123   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:48.699126   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:49.181859   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:49.683492   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:49.996881   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:50.182514   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:50.683726   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:51.103169   10463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.106243287s)
	W1025 08:31:51.103207   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:51.103247   10463 retry.go:31] will retry after 18.652136107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 08:31:51.183569   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:51.682553   10463 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:52.184187   10463 kapi.go:107] duration metric: took 1m15.006032649s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 08:31:52.186088   10463 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-631036 cluster.
	I1025 08:31:52.187578   10463 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 08:31:52.189012   10463 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
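
Editor's note: per the message above, a pod opts out of credential mounting by carrying the `gcp-auth-skip-secret` label key. A hedged sketch of such a pod spec; the pod name and image are placeholders, not from this run:

	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth              # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true" # key presence is what the message above calls for
	spec:
	  containers:
	  - name: app
	    image: busybox               # placeholder image
	    command: ["sleep", "3600"]
	EOF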
	I1025 08:32:09.756474   10463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 08:32:10.482803   10463 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 08:32:10.482904   10463 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
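
Editor's note: the failure is client-side validation, meaning ig-crd.yaml apparently contains a YAML document missing its apiVersion and kind fields. A hedged reproduction of the same error class with a deliberately incomplete manifest (path and contents are illustrative, not the actual ig-crd.yaml); the error text matches the log above:

	cat > /tmp/incomplete.yaml <<'EOF'
	metadata:
	  name: example
	EOF
	kubectl apply -f /tmp/incomplete.yaml
	# error: error validating "/tmp/incomplete.yaml": error validating data:
	# [apiVersion not set, kind not set]; if you choose to ignore these
	# errors, turn validation off with --validate=false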
	I1025 08:32:10.485391   10463 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, ingress-dns, storage-provisioner, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1025 08:32:10.487232   10463 addons.go:514] duration metric: took 1m46.769080493s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin registry-creds ingress-dns storage-provisioner cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1025 08:32:10.487301   10463 start.go:246] waiting for cluster config update ...
	I1025 08:32:10.487323   10463 start.go:255] writing updated cluster config ...
	I1025 08:32:10.487604   10463 ssh_runner.go:195] Run: rm -f paused
	I1025 08:32:10.493963   10463 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 08:32:10.498263   10463 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wk56k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:10.504470   10463 pod_ready.go:94] pod "coredns-66bc5c9577-wk56k" is "Ready"
	I1025 08:32:10.504495   10463 pod_ready.go:86] duration metric: took 6.173199ms for pod "coredns-66bc5c9577-wk56k" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:10.507234   10463 pod_ready.go:83] waiting for pod "etcd-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:10.513050   10463 pod_ready.go:94] pod "etcd-addons-631036" is "Ready"
	I1025 08:32:10.513086   10463 pod_ready.go:86] duration metric: took 5.808461ms for pod "etcd-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:10.515221   10463 pod_ready.go:83] waiting for pod "kube-apiserver-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:10.520928   10463 pod_ready.go:94] pod "kube-apiserver-addons-631036" is "Ready"
	I1025 08:32:10.520964   10463 pod_ready.go:86] duration metric: took 5.702304ms for pod "kube-apiserver-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:10.525562   10463 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:10.898801   10463 pod_ready.go:94] pod "kube-controller-manager-addons-631036" is "Ready"
	I1025 08:32:10.898828   10463 pod_ready.go:86] duration metric: took 373.239167ms for pod "kube-controller-manager-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:11.099768   10463 pod_ready.go:83] waiting for pod "kube-proxy-nzdhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:11.498928   10463 pod_ready.go:94] pod "kube-proxy-nzdhm" is "Ready"
	I1025 08:32:11.498953   10463 pod_ready.go:86] duration metric: took 399.159654ms for pod "kube-proxy-nzdhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:11.699075   10463 pod_ready.go:83] waiting for pod "kube-scheduler-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:12.097997   10463 pod_ready.go:94] pod "kube-scheduler-addons-631036" is "Ready"
	I1025 08:32:12.098024   10463 pod_ready.go:86] duration metric: took 398.907605ms for pod "kube-scheduler-addons-631036" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:12.098054   10463 pod_ready.go:40] duration metric: took 1.604055392s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
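
Editor's note: the pod_ready checks above poll the API directly for each control-plane label. Roughly the same check can be done by hand with kubectl wait (context name taken from this run; timeout illustrative):

	kubectl --context addons-631036 -n kube-system wait pod \
	    -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	kubectl --context addons-631036 -n kube-system wait pod \
	    -l component=kube-apiserver --for=condition=Ready --timeout=4m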
	I1025 08:32:12.142324   10463 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 08:32:12.144374   10463 out.go:179] * Done! kubectl is now configured to use "addons-631036" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.857620131Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c485411-de62-4ec6-9b92-5ea9c668d8bd name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.858913189Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba649ef510c320f775909d4460e56a36b46f3dd31e59195bf71be7cff2f62a8b,PodSandboxId:c52e3bd825ede126abab9b4d6468adc65b010ad5689b8128806a26f2a6e31914,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761381180140542495,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c09d0ba-4bcf-41ee-a6df-0ac2dfc801a8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f2511eac37cc062cd57134739c7258c0d93ee71f123a7973659c9bdcbb2efb,PodSandboxId:05b61984d2b069f30739e6da42a76fe9219b7c33cf6487494aacbca1e8e271b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761381137241231973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f2f6f1d-47e1-4920-87aa-ea653b62155e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668d1cf35f06cbd179f51d51212cb1bea413f03a7041ae3de44d0fa8ee001d0e,PodSandboxId:a441975f9c54e4ab33d49dc47071c868f1c084d831eb92a69c10061efd52104f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761381107092340480,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-mfkds,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50e82f06-39c8-4f72-a0a5-6b4703790748,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e3d9d99a48fa27dd71568ac5165f17fac6ae24ac5b9895d27349cc8811fbb173,PodSandboxId:ed7501f8db956f8ab57934a1b3de2400f821fc9352275b3ad2c886c61fccb16d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761381088955281135,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rmrl2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bb255d16-2e85-41e8-9ed9-a35ce6b6acc9,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b7d625c06ccc43ff637a53c6aef306e50d0948f7c40895b1b8476ffcfa1535,PodSandboxId:edaf6cd0334deabc8ea039f6c3ad31e25477396185fca88c4969c154f9a15a3c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761381088853541935,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-29xlb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50c6c028-9478-4d0e-b417-21f918310c81,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c32668b06a8dda97480854551a3f509075d440bfcf9c2837afc7a9eeabe0c9,PodSandboxId:bb605a2e21acae12f5471fa897aa6f3b0927b5d2cda229731b0de557b2af9d3f,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761381073172437309,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gg64c,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 274413d3-cf62-4e8a-a462-c34623a92df7,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d454023a5e021e53842e04e29085483d3903d0cd3904d318a4795e26495578ae,PodSandboxId:b879b682b9e49d6c4d61e3a94fe3a99f8c192e266af6518020571712be146631,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761381060198237535,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8340127-56b4-4638-b7ca-1a5815a313cc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721e2f83faa260a0aa770486fb488a646a740f18306894e3c98ad4cfab67ff2a,PodSandboxId:c1d7a85a1a0663dccb16e2d1c8c4023f1d1d232f349461d
c32f7d5326db133db,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761381043605417893,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-frvrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201f5833-8bf6-475d-82b1-c927a3c7317b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3949e1e589d6de8f336d9514c13a17e20b8e77323cd06ebc8c00867db7c45eb4,PodSandboxId:1672536
cad4dd9773bcf09f09b181a78fbd046231903bdc49d51247632bf214d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761381035579365038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ababc1-07e4-4d36-89b2-8a6c8d29de6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e76d1b0e30682069f4cd18ed6791c5929f8b1dac8174f741c4d364a982c223e,PodSandboxId:ac9360fccaa24d400f5
b9b3a72030c724d9a225b0b1e521fc0eff7041ca03d52,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761381024888422175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wk56k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1147dfe5-42e8-493d-b71e-b18c2dccea1a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1290b3d741f76073327b7413370692cb5c197f53e359b26187abcb0025aeffbb,PodSandboxId:3309bf2556203c8c0193de8cdcb28ef8c8dfd5a80032b5a824cb6bf4e91fdde3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761381023562519757,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nzdhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3cd3e35-b924-472f-9218-233cdce69396,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4570839da44e8b3a2fa055fe684460608e271192fd6c44ac946666456795adcc,PodSandboxId:498a148b1fa089187f01fbd98131c676133adc17c66049a4afb37ef4b8e72b79,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761381012442655834,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16e8a53d09a7fbb3a5e534717096a964,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9cf074fa9b3410489789b50b5dde599ca5b43a9ac50ecb4c836d80d3338c955,PodSandboxId:44ec30afdfe0e08068898f9d8da354d73493db2d38636ad705a4c7ea0ebe7f85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761381012465811659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 690e6a5b2418f0ca9b6f3c0b414ff231,},Annotations:map[string]str
ing{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:577f1fc6b5aa6028e9ec35cacc97e7a58a56550d7b5e0161bce35c86154ebe5d,PodSandboxId:dfa1994a628a0947a42108e7de8545ea85ae551aaf7fd5d96bdd8fac5fe92db9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761381012407959375,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons
-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe261af37f4a81df64519dd9c14a22d0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:012875c13f0610cc9ba2625b34d3309c83847bdd8a40fda7f3741223f8077fe4,PodSandboxId:18ce3d8a538afdbfe1a0e6410d96f13b142763fa6ff6f6dfb71d21d8f2f0d8fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761381012384741295,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e721612489f67f85d112768938e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c485411-de62-4ec6-9b92-5ea9c668d8bd name=/runtime.v1.RuntimeService/ListContainers
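
Editor's note: the container inventory in the response above can be queried on the node with crictl, which speaks the same CRI RuntimeService API (run inside the minikube VM, e.g. via minikube ssh):

	sudo crictl ps -a           # all containers, including the exited admission jobs
	sudo crictl ps --name nginx # filter by container name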
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.861723811Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 621636ff-a5a1-4705-859c-3adbd54cbb54,},},}" file="otel-collector/interceptors.go:62" id=9d220979-08c6-4db7-8a92-083611fd26f9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.861922986Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6e39dcdd38eabd3fef38735b36482eff8731fe2c886ef1dc2fda1ce7b638ab3c,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-m9rs7,Uid:621636ff-a5a1-4705-859c-3adbd54cbb54,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761381322984937941,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-m9rs7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 621636ff-a5a1-4705-859c-3adbd54cbb54,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-25T08:35:22.660871119Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9d220979-08c6-4db7-8a92-083611fd26f9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.862722165Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:6e39dcdd38eabd3fef38735b36482eff8731fe2c886ef1dc2fda1ce7b638ab3c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=319f1bfd-a7ae-4860-81e2-17811ecec85d name=/runtime.v1.RuntimeService/PodSandboxStatus
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.863440870Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:6e39dcdd38eabd3fef38735b36482eff8731fe2c886ef1dc2fda1ce7b638ab3c,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-m9rs7,Uid:621636ff-a5a1-4705-859c-3adbd54cbb54,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761381322984937941,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:&UserNamespace{Mode:NODE,Uids:[]*IDMapping{},Gids:[]*IDMapping{},},},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-m9rs7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 621636ff-a5a1-4705-859c-3adbd54cbb54,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen:
2025-10-25T08:35:22.660871119Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=319f1bfd-a7ae-4860-81e2-17811ecec85d name=/runtime.v1.RuntimeService/PodSandboxStatus
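
Editor's note: the sandbox status above maps to crictl's pod-level commands; the sandbox ID is copied from the response:

	sudo crictl pods --name hello-world-app-5d498dc89-m9rs7
	sudo crictl inspectp 6e39dcdd38eabd3fef38735b36482eff8731fe2c886ef1dc2fda1ce7b638ab3c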
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.867338405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 621636ff-a5a1-4705-859c-3adbd54cbb54,},},}" file="otel-collector/interceptors.go:62" id=ed16043e-e5c2-4252-a570-410c61708034 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.867440902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed16043e-e5c2-4252-a570-410c61708034 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.867491365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ed16043e-e5c2-4252-a570-410c61708034 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.886911774Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.888260947Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.901619388Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=324d4d13-655f-439c-b083-8c4455ad7ba1 name=/runtime.v1.RuntimeService/Version
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.901792066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=324d4d13-655f-439c-b083-8c4455ad7ba1 name=/runtime.v1.RuntimeService/Version
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.903296472Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0ab21ea-5bd2-479b-8cbd-c86562a4f9f3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.904586925Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761381323904559787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588896,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0ab21ea-5bd2-479b-8cbd-c86562a4f9f3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.905418915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=daa0bc1e-e644-44f1-9b06-3a7c9137a586 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.905715317Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=daa0bc1e-e644-44f1-9b06-3a7c9137a586 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.906340257Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba649ef510c320f775909d4460e56a36b46f3dd31e59195bf71be7cff2f62a8b,PodSandboxId:c52e3bd825ede126abab9b4d6468adc65b010ad5689b8128806a26f2a6e31914,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761381180140542495,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c09d0ba-4bcf-41ee-a6df-0ac2dfc801a8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f2511eac37cc062cd57134739c7258c0d93ee71f123a7973659c9bdcbb2efb,PodSandboxId:05b61984d2b069f30739e6da42a76fe9219b7c33cf6487494aacbca1e8e271b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761381137241231973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f2f6f1d-47e1-4920-87aa-ea653b62155e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668d1cf35f06cbd179f51d51212cb1bea413f03a7041ae3de44d0fa8ee001d0e,PodSandboxId:a441975f9c54e4ab33d49dc47071c868f1c084d831eb92a69c10061efd52104f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761381107092340480,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-mfkds,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50e82f06-39c8-4f72-a0a5-6b4703790748,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e3d9d99a48fa27dd71568ac5165f17fac6ae24ac5b9895d27349cc8811fbb173,PodSandboxId:ed7501f8db956f8ab57934a1b3de2400f821fc9352275b3ad2c886c61fccb16d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761381088955281135,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rmrl2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bb255d16-2e85-41e8-9ed9-a35ce6b6acc9,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b7d625c06ccc43ff637a53c6aef306e50d0948f7c40895b1b8476ffcfa1535,PodSandboxId:edaf6cd0334deabc8ea039f6c3ad31e25477396185fca88c4969c154f9a15a3c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761381088853541935,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-29xlb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50c6c028-9478-4d0e-b417-21f918310c81,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c32668b06a8dda97480854551a3f509075d440bfcf9c2837afc7a9eeabe0c9,PodSandboxId:bb605a2e21acae12f5471fa897aa6f3b0927b5d2cda229731b0de557b2af9d3f,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761381073172437309,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gg64c,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 274413d3-cf62-4e8a-a462-c34623a92df7,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d454023a5e021e53842e04e29085483d3903d0cd3904d318a4795e26495578ae,PodSandboxId:b879b682b9e49d6c4d61e3a94fe3a99f8c192e266af6518020571712be146631,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761381060198237535,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8340127-56b4-4638-b7ca-1a5815a313cc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721e2f83faa260a0aa770486fb488a646a740f18306894e3c98ad4cfab67ff2a,PodSandboxId:c1d7a85a1a0663dccb16e2d1c8c4023f1d1d232f349461d
c32f7d5326db133db,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761381043605417893,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-frvrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201f5833-8bf6-475d-82b1-c927a3c7317b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3949e1e589d6de8f336d9514c13a17e20b8e77323cd06ebc8c00867db7c45eb4,PodSandboxId:1672536
cad4dd9773bcf09f09b181a78fbd046231903bdc49d51247632bf214d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761381035579365038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ababc1-07e4-4d36-89b2-8a6c8d29de6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e76d1b0e30682069f4cd18ed6791c5929f8b1dac8174f741c4d364a982c223e,PodSandboxId:ac9360fccaa24d400f5
b9b3a72030c724d9a225b0b1e521fc0eff7041ca03d52,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761381024888422175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wk56k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1147dfe5-42e8-493d-b71e-b18c2dccea1a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1290b3d741f76073327b7413370692cb5c197f53e359b26187abcb0025aeffbb,PodSandboxId:3309bf2556203c8c0193de8cdcb28ef8c8dfd5a80032b5a824cb6bf4e91fdde3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761381023562519757,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nzdhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3cd3e35-b924-472f-9218-233cdce69396,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4570839da44e8b3a2fa055fe684460608e271192fd6c44ac946666456795adcc,PodSandboxId:498a148b1fa089187f01fbd98131c676133adc17c66049a4afb37ef4b8e72b79,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761381012442655834,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16e8a53d09a7fbb3a5e534717096a964,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9cf074fa9b3410489789b50b5dde599ca5b43a9ac50ecb4c836d80d3338c955,PodSandboxId:44ec30afdfe0e08068898f9d8da354d73493db2d38636ad705a4c7ea0ebe7f85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761381012465811659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 690e6a5b2418f0ca9b6f3c0b414ff231,},Annotations:map[string]str
ing{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:577f1fc6b5aa6028e9ec35cacc97e7a58a56550d7b5e0161bce35c86154ebe5d,PodSandboxId:dfa1994a628a0947a42108e7de8545ea85ae551aaf7fd5d96bdd8fac5fe92db9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761381012407959375,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons
-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe261af37f4a81df64519dd9c14a22d0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:012875c13f0610cc9ba2625b34d3309c83847bdd8a40fda7f3741223f8077fe4,PodSandboxId:18ce3d8a538afdbfe1a0e6410d96f13b142763fa6ff6f6dfb71d21d8f2f0d8fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761381012384741295,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e721612489f67f85d112768938e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=daa0bc1e-e644-44f1-9b06-3a7c9137a586 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.946719615Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=77cbe99c-68e3-4cc9-811c-8bb1fb285650 name=/runtime.v1.RuntimeService/Version
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.946811601Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=77cbe99c-68e3-4cc9-811c-8bb1fb285650 name=/runtime.v1.RuntimeService/Version
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.948441725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9ec5343-7664-4cd4-98d6-9df2fb3195da name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.950177285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761381323950147190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588896,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9ec5343-7664-4cd4-98d6-9df2fb3195da name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.950960713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3468038-10e1-4cd2-a28c-7b93cc6e6403 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.951286998Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3468038-10e1-4cd2-a28c-7b93cc6e6403 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 08:35:23 addons-631036 crio[809]: time="2025-10-25 08:35:23.951872900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba649ef510c320f775909d4460e56a36b46f3dd31e59195bf71be7cff2f62a8b,PodSandboxId:c52e3bd825ede126abab9b4d6468adc65b010ad5689b8128806a26f2a6e31914,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761381180140542495,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2c09d0ba-4bcf-41ee-a6df-0ac2dfc801a8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f2511eac37cc062cd57134739c7258c0d93ee71f123a7973659c9bdcbb2efb,PodSandboxId:05b61984d2b069f30739e6da42a76fe9219b7c33cf6487494aacbca1e8e271b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761381137241231973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f2f6f1d-47e1-4920-87aa-ea653b62155e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668d1cf35f06cbd179f51d51212cb1bea413f03a7041ae3de44d0fa8ee001d0e,PodSandboxId:a441975f9c54e4ab33d49dc47071c868f1c084d831eb92a69c10061efd52104f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761381107092340480,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-mfkds,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50e82f06-39c8-4f72-a0a5-6b4703790748,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e3d9d99a48fa27dd71568ac5165f17fac6ae24ac5b9895d27349cc8811fbb173,PodSandboxId:ed7501f8db956f8ab57934a1b3de2400f821fc9352275b3ad2c886c61fccb16d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761381088955281135,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rmrl2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bb255d16-2e85-41e8-9ed9-a35ce6b6acc9,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b7d625c06ccc43ff637a53c6aef306e50d0948f7c40895b1b8476ffcfa1535,PodSandboxId:edaf6cd0334deabc8ea039f6c3ad31e25477396185fca88c4969c154f9a15a3c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761381088853541935,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-29xlb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50c6c028-9478-4d0e-b417-21f918310c81,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c32668b06a8dda97480854551a3f509075d440bfcf9c2837afc7a9eeabe0c9,PodSandboxId:bb605a2e21acae12f5471fa897aa6f3b0927b5d2cda229731b0de557b2af9d3f,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761381073172437309,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gg64c,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 274413d3-cf62-4e8a-a462-c34623a92df7,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d454023a5e021e53842e04e29085483d3903d0cd3904d318a4795e26495578ae,PodSandboxId:b879b682b9e49d6c4d61e3a94fe3a99f8c192e266af6518020571712be146631,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761381060198237535,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8340127-56b4-4638-b7ca-1a5815a313cc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721e2f83faa260a0aa770486fb488a646a740f18306894e3c98ad4cfab67ff2a,PodSandboxId:c1d7a85a1a0663dccb16e2d1c8c4023f1d1d232f349461d
c32f7d5326db133db,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761381043605417893,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-frvrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201f5833-8bf6-475d-82b1-c927a3c7317b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3949e1e589d6de8f336d9514c13a17e20b8e77323cd06ebc8c00867db7c45eb4,PodSandboxId:1672536
cad4dd9773bcf09f09b181a78fbd046231903bdc49d51247632bf214d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761381035579365038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ababc1-07e4-4d36-89b2-8a6c8d29de6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e76d1b0e30682069f4cd18ed6791c5929f8b1dac8174f741c4d364a982c223e,PodSandboxId:ac9360fccaa24d400f5
b9b3a72030c724d9a225b0b1e521fc0eff7041ca03d52,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761381024888422175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wk56k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1147dfe5-42e8-493d-b71e-b18c2dccea1a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1290b3d741f76073327b7413370692cb5c197f53e359b26187abcb0025aeffbb,PodSandboxId:3309bf2556203c8c0193de8cdcb28ef8c8dfd5a80032b5a824cb6bf4e91fdde3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761381023562519757,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nzdhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3cd3e35-b924-472f-9218-233cdce69396,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4570839da44e8b3a2fa055fe684460608e271192fd6c44ac946666456795adcc,PodSandboxId:498a148b1fa089187f01fbd98131c676133adc17c66049a4afb37ef4b8e72b79,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761381012442655834,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16e8a53d09a7fbb3a5e534717096a964,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9cf074fa9b3410489789b50b5dde599ca5b43a9ac50ecb4c836d80d3338c955,PodSandboxId:44ec30afdfe0e08068898f9d8da354d73493db2d38636ad705a4c7ea0ebe7f85,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761381012465811659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 690e6a5b2418f0ca9b6f3c0b414ff231,},Annotations:map[string]str
ing{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:577f1fc6b5aa6028e9ec35cacc97e7a58a56550d7b5e0161bce35c86154ebe5d,PodSandboxId:dfa1994a628a0947a42108e7de8545ea85ae551aaf7fd5d96bdd8fac5fe92db9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761381012407959375,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons
-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe261af37f4a81df64519dd9c14a22d0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:012875c13f0610cc9ba2625b34d3309c83847bdd8a40fda7f3741223f8077fe4,PodSandboxId:18ce3d8a538afdbfe1a0e6410d96f13b142763fa6ff6f6dfb71d21d8f2f0d8fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761381012384741295,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-631036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e721612489f67f85d112768938e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3468038-10e1-4cd2-a28c-7b93cc6e6403 name=/runtime.v1.RuntimeService/ListContainers
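
The block above is the raw protobuf dump of a single CRI ListContainers response, logged by the otel-collector interceptor; the mid-token line wrapping is an artifact of the log capture, not of the runtime. A more readable view of the same container set can be pulled straight from the guest; a diagnostic sketch (not part of the recorded test run) using the same profile name:

	# list all CRI-O containers, running and exited, inside the minikube VM
	out/minikube-linux-amd64 -p addons-631036 ssh "sudo crictl ps -a"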
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ba649ef510c32       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago       Running             nginx                     0                   c52e3bd825ede       nginx
	51f2511eac37c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   05b61984d2b06       busybox
	668d1cf35f06c       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago       Running             controller                0                   a441975f9c54e       ingress-nginx-controller-675c5ddd98-mfkds
	e3d9d99a48fa2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   3 minutes ago       Exited              patch                     0                   ed7501f8db956       ingress-nginx-admission-patch-rmrl2
	66b7d625c06cc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   3 minutes ago       Exited              create                    0                   edaf6cd0334de       ingress-nginx-admission-create-29xlb
	c7c32668b06a8       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   bb605a2e21aca       gadget-gg64c
	d454023a5e021       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   b879b682b9e49       kube-ingress-dns-minikube
	721e2f83faa26       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   c1d7a85a1a066       amd-gpu-device-plugin-frvrc
	3949e1e589d6d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   1672536cad4dd       storage-provisioner
	0e76d1b0e3068       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   ac9360fccaa24       coredns-66bc5c9577-wk56k
	1290b3d741f76       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   3309bf2556203       kube-proxy-nzdhm
	a9cf074fa9b34       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   44ec30afdfe0e       kube-scheduler-addons-631036
	4570839da44e8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   498a148b1fa08       etcd-addons-631036
	577f1fc6b5aa6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   dfa1994a628a0       kube-controller-manager-addons-631036
	012875c13f061       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   18ce3d8a538af       kube-apiserver-addons-631036
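
Every workload the Ingress test depends on (nginx, the ingress-nginx controller, ingress-dns, coredns) shows STATE Running with ATTEMPT 0, so the failure looks like connectivity or routing rather than a crash-looping pod; the exit status 28 reported through ssh matches curl's operation-timed-out code (CURLE_OPERATION_TIMEDOUT). One way to re-probe from inside the VM, a sketch that adds an explicit 10s deadline and a verbose trace to the exact command the test ran:

	out/minikube-linux-amd64 -p addons-631036 ssh "curl -sv -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"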
	
	
	==> coredns [0e76d1b0e30682069f4cd18ed6791c5929f8b1dac8174f741c4d364a982c223e] <==
	[INFO] 10.244.0.8:53549 - 54721 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000080295s
	[INFO] 10.244.0.8:53549 - 17715 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000225624s
	[INFO] 10.244.0.8:53549 - 29869 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000066707s
	[INFO] 10.244.0.8:53549 - 38204 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000212605s
	[INFO] 10.244.0.8:53549 - 47698 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000072373s
	[INFO] 10.244.0.8:53549 - 8203 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000107784s
	[INFO] 10.244.0.8:53549 - 14630 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00024967s
	[INFO] 10.244.0.8:38244 - 15356 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000121357s
	[INFO] 10.244.0.8:38244 - 15636 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088229s
	[INFO] 10.244.0.8:45168 - 53909 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00009541s
	[INFO] 10.244.0.8:45168 - 54212 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008685s
	[INFO] 10.244.0.8:47589 - 2645 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059858s
	[INFO] 10.244.0.8:47589 - 2895 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000062418s
	[INFO] 10.244.0.8:57234 - 17719 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000249377s
	[INFO] 10.244.0.8:57234 - 18140 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00013674s
	[INFO] 10.244.0.23:49170 - 64485 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000836695s
	[INFO] 10.244.0.23:43288 - 8095 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000208942s
	[INFO] 10.244.0.23:47920 - 10132 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113568s
	[INFO] 10.244.0.23:56184 - 54988 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000087744s
	[INFO] 10.244.0.23:44694 - 44820 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000206826s
	[INFO] 10.244.0.23:57625 - 62687 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125151s
	[INFO] 10.244.0.23:58662 - 25286 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001457941s
	[INFO] 10.244.0.23:53651 - 42374 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001146083s
	[INFO] 10.244.0.27:45721 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000543779s
	[INFO] 10.244.0.27:59460 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000204923s
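
The NXDOMAIN runs above are normal Kubernetes resolver behaviour, not failures: with ndots:5, a name such as registry.kube-system.svc.cluster.local is first tried against every entry in the pod's search path (hence the .kube-system.svc.cluster.local, .svc.cluster.local, and .cluster.local suffixed queries) before the final NOERROR answer. The search path can be confirmed from any pod; a sketch against the test's busybox pod, assuming it is still Running as the status table shows:

	kubectl --context addons-631036 exec busybox -- cat /etc/resolv.conf
	# expected shape:
	#   search default.svc.cluster.local svc.cluster.local cluster.local
	#   options ndots:5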
	
	
	==> describe nodes <==
	Name:               addons-631036
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-631036
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=addons-631036
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T08_30_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-631036
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 08:30:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-631036
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 08:35:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 08:33:22 +0000   Sat, 25 Oct 2025 08:30:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 08:33:22 +0000   Sat, 25 Oct 2025 08:30:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 08:33:22 +0000   Sat, 25 Oct 2025 08:30:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 08:33:22 +0000   Sat, 25 Oct 2025 08:30:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.24
	  Hostname:    addons-631036
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 47cdcab0e8ea48b5a70c5c459d82a833
	  System UUID:                47cdcab0-e8ea-48b5-a70c-5c459d82a833
	  Boot ID:                    ddcee597-ce31-4c7f-9e40-372d0f38163a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  default                     hello-world-app-5d498dc89-m9rs7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gadget                      gadget-gg64c                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-mfkds    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m52s
	  kube-system                 amd-gpu-device-plugin-frvrc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 coredns-66bc5c9577-wk56k                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m1s
	  kube-system                 etcd-addons-631036                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m6s
	  kube-system                 kube-apiserver-addons-631036                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-controller-manager-addons-631036        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-nzdhm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-addons-631036                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m59s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m13s (x8 over 5m13s)  kubelet          Node addons-631036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s (x8 over 5m13s)  kubelet          Node addons-631036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x7 over 5m13s)  kubelet          Node addons-631036 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m6s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m6s                   kubelet          Node addons-631036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s                   kubelet          Node addons-631036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s                   kubelet          Node addons-631036 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m5s                   kubelet          Node addons-631036 status is now: NodeReady
	  Normal  RegisteredNode           5m2s                   node-controller  Node addons-631036 event: Registered Node addons-631036 in Controller
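
Nothing in the node description points at the ingress failure: the node is Ready and untainted, and CPU requests sit at 850m of the 2-core allocatable. The same view can be regenerated at any time with:

	kubectl --context addons-631036 describe node addons-631036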
	
	
	==> dmesg <==
	[  +0.000031] kauditd_printk_skb: 369 callbacks suppressed
	[ +10.445317] kauditd_printk_skb: 142 callbacks suppressed
	[Oct25 08:31] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.734058] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.242656] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.908158] kauditd_printk_skb: 20 callbacks suppressed
	[  +4.262615] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.104054] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.711444] kauditd_printk_skb: 141 callbacks suppressed
	[  +0.000286] kauditd_printk_skb: 93 callbacks suppressed
	[  +5.520943] kauditd_printk_skb: 26 callbacks suppressed
	[ +11.856470] kauditd_printk_skb: 38 callbacks suppressed
	[Oct25 08:32] kauditd_printk_skb: 2 callbacks suppressed
	[ +14.007694] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.044518] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.492281] kauditd_printk_skb: 44 callbacks suppressed
	[  +2.460316] kauditd_printk_skb: 150 callbacks suppressed
	[  +0.412509] kauditd_printk_skb: 152 callbacks suppressed
	[  +0.180278] kauditd_printk_skb: 156 callbacks suppressed
	[Oct25 08:33] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.949955] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.431379] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 30 callbacks suppressed
	[  +7.562907] kauditd_printk_skb: 41 callbacks suppressed
	[Oct25 08:35] kauditd_printk_skb: 127 callbacks suppressed
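
The dmesg excerpt is dominated by kauditd_printk_skb "callbacks suppressed" notices, which are kernel audit rate-limiting messages rather than errors. If the full ring buffer is needed, it can be read from the guest:

	out/minikube-linux-amd64 -p addons-631036 ssh "sudo dmesg | tail -n 50"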
	
	
	==> etcd [4570839da44e8b3a2fa055fe684460608e271192fd6c44ac946666456795adcc] <==
	{"level":"info","ts":"2025-10-25T08:30:58.602733Z","caller":"traceutil/trace.go:172","msg":"trace[604932350] transaction","detail":"{read_only:false; response_revision:955; number_of_response:1; }","duration":"203.775922ms","start":"2025-10-25T08:30:58.398942Z","end":"2025-10-25T08:30:58.602718Z","steps":["trace[604932350] 'process raft request'  (duration: 203.686371ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:31:05.831027Z","caller":"traceutil/trace.go:172","msg":"trace[1622288266] transaction","detail":"{read_only:false; response_revision:979; number_of_response:1; }","duration":"106.39306ms","start":"2025-10-25T08:31:05.724622Z","end":"2025-10-25T08:31:05.831015Z","steps":["trace[1622288266] 'process raft request'  (duration: 106.303926ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:31:07.414848Z","caller":"traceutil/trace.go:172","msg":"trace[2139928386] transaction","detail":"{read_only:false; response_revision:982; number_of_response:1; }","duration":"151.355643ms","start":"2025-10-25T08:31:07.263479Z","end":"2025-10-25T08:31:07.414835Z","steps":["trace[2139928386] 'process raft request'  (duration: 149.538864ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:31:17.528057Z","caller":"traceutil/trace.go:172","msg":"trace[103179203] transaction","detail":"{read_only:false; response_revision:1005; number_of_response:1; }","duration":"137.753023ms","start":"2025-10-25T08:31:17.390282Z","end":"2025-10-25T08:31:17.528035Z","steps":["trace[103179203] 'process raft request'  (duration: 137.637654ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:31:24.617904Z","caller":"traceutil/trace.go:172","msg":"trace[1806828665] transaction","detail":"{read_only:false; response_revision:1037; number_of_response:1; }","duration":"327.561656ms","start":"2025-10-25T08:31:24.290329Z","end":"2025-10-25T08:31:24.617890Z","steps":["trace[1806828665] 'process raft request'  (duration: 327.430368ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T08:31:24.618695Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-25T08:31:24.290303Z","time spent":"327.675054ms","remote":"127.0.0.1:35560","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3995,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/local-path-storage/local-path-provisioner-648f6765c9-rdtcc\" mod_revision:636 > success:<request_put:<key:\"/registry/pods/local-path-storage/local-path-provisioner-648f6765c9-rdtcc\" value_size:3914 >> failure:<request_range:<key:\"/registry/pods/local-path-storage/local-path-provisioner-648f6765c9-rdtcc\" > >"}
	{"level":"info","ts":"2025-10-25T08:31:30.331568Z","caller":"traceutil/trace.go:172","msg":"trace[168738082] transaction","detail":"{read_only:false; response_revision:1075; number_of_response:1; }","duration":"139.190952ms","start":"2025-10-25T08:31:30.192364Z","end":"2025-10-25T08:31:30.331555Z","steps":["trace[168738082] 'process raft request'  (duration: 139.077776ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:31:45.295270Z","caller":"traceutil/trace.go:172","msg":"trace[1724880962] linearizableReadLoop","detail":"{readStateIndex:1199; appliedIndex:1199; }","duration":"252.970603ms","start":"2025-10-25T08:31:45.042257Z","end":"2025-10-25T08:31:45.295227Z","steps":["trace[1724880962] 'read index received'  (duration: 252.954898ms)","trace[1724880962] 'applied index is now lower than readState.Index'  (duration: 14.368µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T08:31:45.295354Z","caller":"traceutil/trace.go:172","msg":"trace[1337160178] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"263.746854ms","start":"2025-10-25T08:31:45.031596Z","end":"2025-10-25T08:31:45.295342Z","steps":["trace[1337160178] 'process raft request'  (duration: 263.648163ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T08:31:45.295460Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"253.167718ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T08:31:45.295486Z","caller":"traceutil/trace.go:172","msg":"trace[1478105519] range","detail":"{range_begin:/registry/daemonsets; range_end:; response_count:0; response_revision:1162; }","duration":"253.224441ms","start":"2025-10-25T08:31:45.042252Z","end":"2025-10-25T08:31:45.295477Z","steps":["trace[1478105519] 'agreement among raft nodes before linearized reading'  (duration: 253.134425ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T08:31:45.295656Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.966042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T08:31:45.295675Z","caller":"traceutil/trace.go:172","msg":"trace[1318672182] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1162; }","duration":"121.987244ms","start":"2025-10-25T08:31:45.173683Z","end":"2025-10-25T08:31:45.295670Z","steps":["trace[1318672182] 'agreement among raft nodes before linearized reading'  (duration: 121.952488ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T08:31:45.295766Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.989807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T08:31:45.295799Z","caller":"traceutil/trace.go:172","msg":"trace[469332901] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1162; }","duration":"120.022851ms","start":"2025-10-25T08:31:45.175771Z","end":"2025-10-25T08:31:45.295794Z","steps":["trace[469332901] 'agreement among raft nodes before linearized reading'  (duration: 119.979282ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:32:41.045385Z","caller":"traceutil/trace.go:172","msg":"trace[1100788515] transaction","detail":"{read_only:false; response_revision:1417; number_of_response:1; }","duration":"148.074185ms","start":"2025-10-25T08:32:40.897277Z","end":"2025-10-25T08:32:41.045351Z","steps":["trace[1100788515] 'process raft request'  (duration: 147.17463ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T08:32:46.687918Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.338137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T08:32:46.688015Z","caller":"traceutil/trace.go:172","msg":"trace[31958330] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1472; }","duration":"161.447285ms","start":"2025-10-25T08:32:46.526553Z","end":"2025-10-25T08:32:46.688000Z","steps":["trace[31958330] 'range keys from in-memory index tree'  (duration: 161.245951ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T08:32:46.688384Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.06948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/cloud-spanner-emulator-86bd5cbb97-zg648\" limit:1 ","response":"range_response_count:1 size:3721"}
	{"level":"info","ts":"2025-10-25T08:32:46.688412Z","caller":"traceutil/trace.go:172","msg":"trace[1703085354] range","detail":"{range_begin:/registry/pods/default/cloud-spanner-emulator-86bd5cbb97-zg648; range_end:; response_count:1; response_revision:1472; }","duration":"103.104819ms","start":"2025-10-25T08:32:46.585299Z","end":"2025-10-25T08:32:46.688404Z","steps":["trace[1703085354] 'range keys from in-memory index tree'  (duration: 102.992464ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:33:16.030866Z","caller":"traceutil/trace.go:172","msg":"trace[1186067016] transaction","detail":"{read_only:false; response_revision:1671; number_of_response:1; }","duration":"283.477657ms","start":"2025-10-25T08:33:15.747364Z","end":"2025-10-25T08:33:16.030842Z","steps":["trace[1186067016] 'process raft request'  (duration: 283.30274ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-25T08:33:52.416870Z","caller":"traceutil/trace.go:172","msg":"trace[2063075386] linearizableReadLoop","detail":"{readStateIndex:1982; appliedIndex:1982; }","duration":"141.779944ms","start":"2025-10-25T08:33:52.274963Z","end":"2025-10-25T08:33:52.416743Z","steps":["trace[2063075386] 'read index received'  (duration: 141.770696ms)","trace[2063075386] 'applied index is now lower than readState.Index'  (duration: 8.097µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T08:33:52.416905Z","caller":"traceutil/trace.go:172","msg":"trace[406845407] transaction","detail":"{read_only:false; response_revision:1908; number_of_response:1; }","duration":"144.640836ms","start":"2025-10-25T08:33:52.272253Z","end":"2025-10-25T08:33:52.416894Z","steps":["trace[406845407] 'process raft request'  (duration: 144.533598ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T08:33:52.417174Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.087028ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/csi-resizer-role-cfg\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T08:33:52.417204Z","caller":"traceutil/trace.go:172","msg":"trace[641442943] range","detail":"{range_begin:/registry/rolebindings/kube-system/csi-resizer-role-cfg; range_end:; response_count:0; response_revision:1908; }","duration":"142.235113ms","start":"2025-10-25T08:33:52.274959Z","end":"2025-10-25T08:33:52.417194Z","steps":["trace[641442943] 'agreement among raft nodes before linearized reading'  (duration: 142.063737ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:35:24 up 5 min,  0 users,  load average: 0.34, 1.11, 0.61
	Linux addons-631036 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [012875c13f0610cc9ba2625b34d3309c83847bdd8a40fda7f3741223f8077fe4] <==
	 > logger="UnhandledError"
	E1025 08:31:09.085758       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1025 08:32:23.959914       1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:46978: use of closed network connection
	E1025 08:32:24.160264       1 conn.go:339] Error on socket receive: read tcp 192.168.39.24:8443->192.168.39.1:47022: use of closed network connection
	I1025 08:32:33.407518       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.59.0"}
	I1025 08:32:55.830034       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 08:32:56.043023       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.166.38"}
	E1025 08:33:09.194645       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1025 08:33:10.051255       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1025 08:33:23.312242       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1025 08:33:25.222145       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1025 08:33:47.211129       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 08:33:47.211415       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 08:33:47.259184       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 08:33:47.259250       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 08:33:47.418802       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 08:33:47.418860       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 08:33:47.423691       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 08:33:47.423724       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 08:33:47.463556       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 08:33:47.465203       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1025 08:33:48.429877       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1025 08:33:48.464160       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1025 08:33:48.574269       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1025 08:35:22.755312       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.53.92"}
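
The apiserver log shows the snapshot.storage.k8s.io group versions being registered and then their watchers terminated at 08:33:48, suggesting the CSI snapshot CRDs were installed and later removed (the addon tests run in parallel); the "use of closed network connection" lines are benign client disconnects. The clusterIP allocation for default/nginx at 08:32:56 confirms the Service behind the failing ingress was created, which can be cross-checked with:

	kubectl --context addons-631036 get svc nginx -o wide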
	
	
	==> kube-controller-manager [577f1fc6b5aa6028e9ec35cacc97e7a58a56550d7b5e0161bce35c86154ebe5d] <==
	I1025 08:33:52.477243       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1025 08:33:55.428851       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 08:33:55.430011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 08:33:55.482545       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 08:33:55.483574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 08:33:57.386643       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 08:33:57.387751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 08:34:04.711765       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 08:34:04.712823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 08:34:06.123920       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 08:34:06.125208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 08:34:07.389899       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 08:34:07.391243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 08:34:19.197184       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 08:34:19.198266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 08:34:28.624743       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 08:34:28.626253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 08:34:29.447481       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 08:34:29.448619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 08:35:07.478985       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 08:35:07.480044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 08:35:09.331594       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 08:35:09.332682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 08:35:17.457463       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 08:35:17.458954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
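
The repeated "Failed to watch *v1.PartialObjectMetadata ... the server could not find the requested resource" errors start at 08:33:55, right after the apiserver terminated the volumesnapshot watchers, so the controller-manager's metadata informers are most plausibly retrying watches on CRDs that no longer exist. This is noisy but self-limiting; whether the CRDs are really gone can be verified with (an empty result means they were removed):

	kubectl --context addons-631036 get crd | grep snapshot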
	
	
	==> kube-proxy [1290b3d741f76073327b7413370692cb5c197f53e359b26187abcb0025aeffbb] <==
	I1025 08:30:24.039769       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 08:30:24.140798       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 08:30:24.140908       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.24"]
	E1025 08:30:24.141245       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 08:30:24.247490       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1025 08:30:24.247567       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 08:30:24.247680       1 server_linux.go:132] "Using iptables Proxier"
	I1025 08:30:24.267477       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 08:30:24.267808       1 server.go:527] "Version info" version="v1.34.1"
	I1025 08:30:24.267822       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 08:30:24.276569       1 config.go:200] "Starting service config controller"
	I1025 08:30:24.276583       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 08:30:24.276613       1 config.go:106] "Starting endpoint slice config controller"
	I1025 08:30:24.276616       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 08:30:24.276632       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 08:30:24.276635       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 08:30:24.284955       1 config.go:309] "Starting node config controller"
	I1025 08:30:24.288174       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 08:30:24.288189       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 08:30:24.378750       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 08:30:24.378826       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 08:30:24.378852       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
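
kube-proxy started cleanly in single-stack IPv4 iptables mode; the ip6tables error only means the guest kernel lacks an IPv6 NAT table, which is expected here. Nothing in this log explains the curl timeout. If service routing were suspected, the proxier's NAT rules could be inspected in the guest; a sketch assuming the standard KUBE-SERVICES entry chain that kube-proxy creates in iptables mode:

	out/minikube-linux-amd64 -p addons-631036 ssh "sudo iptables -t nat -L KUBE-SERVICES -n"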
	
	
	==> kube-scheduler [a9cf074fa9b3410489789b50b5dde599ca5b43a9ac50ecb4c836d80d3338c955] <==
	E1025 08:30:15.382268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 08:30:15.384691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 08:30:15.386494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 08:30:15.386601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 08:30:15.386659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 08:30:15.386737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:30:15.386789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 08:30:15.386841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 08:30:15.386868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:30:15.386981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 08:30:15.387006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 08:30:15.387039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 08:30:15.387155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 08:30:16.221586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 08:30:16.336207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 08:30:16.352288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 08:30:16.371236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 08:30:16.416338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 08:30:16.489780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 08:30:16.657868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:30:16.684884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 08:30:16.712362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 08:30:16.728591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 08:30:16.745856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1025 08:30:19.461614       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 08:33:50 addons-631036 kubelet[1497]: I1025 08:33:50.369163    1497 scope.go:117] "RemoveContainer" containerID="37d0252b798b04a246b16645698f61b5ae1d84a5e82c86e91b2aec2897aa617f"
	Oct 25 08:33:50 addons-631036 kubelet[1497]: I1025 08:33:50.369764    1497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d0252b798b04a246b16645698f61b5ae1d84a5e82c86e91b2aec2897aa617f"} err="failed to get container status \"37d0252b798b04a246b16645698f61b5ae1d84a5e82c86e91b2aec2897aa617f\": rpc error: code = NotFound desc = could not find container \"37d0252b798b04a246b16645698f61b5ae1d84a5e82c86e91b2aec2897aa617f\": container with ID starting with 37d0252b798b04a246b16645698f61b5ae1d84a5e82c86e91b2aec2897aa617f not found: ID does not exist"
	Oct 25 08:33:50 addons-631036 kubelet[1497]: I1025 08:33:50.369799    1497 scope.go:117] "RemoveContainer" containerID="88745cacc76412ac74caf33d8464b1c7827fdb02607ccc10346fbf66839f4cd8"
	Oct 25 08:33:50 addons-631036 kubelet[1497]: I1025 08:33:50.371635    1497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88745cacc76412ac74caf33d8464b1c7827fdb02607ccc10346fbf66839f4cd8"} err="failed to get container status \"88745cacc76412ac74caf33d8464b1c7827fdb02607ccc10346fbf66839f4cd8\": rpc error: code = NotFound desc = could not find container \"88745cacc76412ac74caf33d8464b1c7827fdb02607ccc10346fbf66839f4cd8\": container with ID starting with 88745cacc76412ac74caf33d8464b1c7827fdb02607ccc10346fbf66839f4cd8 not found: ID does not exist"
	Oct 25 08:33:58 addons-631036 kubelet[1497]: E1025 08:33:58.444491    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381238443985777  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:33:58 addons-631036 kubelet[1497]: E1025 08:33:58.444574    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381238443985777  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:34:08 addons-631036 kubelet[1497]: E1025 08:34:08.450268    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381248449840636  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:34:08 addons-631036 kubelet[1497]: E1025 08:34:08.450305    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381248449840636  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:34:18 addons-631036 kubelet[1497]: E1025 08:34:18.452753    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381258452442735  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:34:18 addons-631036 kubelet[1497]: E1025 08:34:18.452792    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381258452442735  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:34:28 addons-631036 kubelet[1497]: E1025 08:34:28.455960    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381268455351715  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:34:28 addons-631036 kubelet[1497]: E1025 08:34:28.456005    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381268455351715  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:34:38 addons-631036 kubelet[1497]: I1025 08:34:38.215263    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:34:38 addons-631036 kubelet[1497]: E1025 08:34:38.458724    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381278458313078  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:34:38 addons-631036 kubelet[1497]: E1025 08:34:38.458753    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381278458313078  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:34:47 addons-631036 kubelet[1497]: I1025 08:34:47.214911    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-frvrc" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:34:48 addons-631036 kubelet[1497]: E1025 08:34:48.460861    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381288460524942  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:34:48 addons-631036 kubelet[1497]: E1025 08:34:48.460900    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381288460524942  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:34:58 addons-631036 kubelet[1497]: E1025 08:34:58.463854    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381298463456569  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:34:58 addons-631036 kubelet[1497]: E1025 08:34:58.463877    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381298463456569  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:35:08 addons-631036 kubelet[1497]: E1025 08:35:08.467128    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381308466808152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:35:08 addons-631036 kubelet[1497]: E1025 08:35:08.467156    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381308466808152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:35:18 addons-631036 kubelet[1497]: E1025 08:35:18.469387    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761381318469013643  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:35:18 addons-631036 kubelet[1497]: E1025 08:35:18.469423    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761381318469013643  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588896}  inodes_used:{value:201}}"
	Oct 25 08:35:22 addons-631036 kubelet[1497]: I1025 08:35:22.721400    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9glk5\" (UniqueName: \"kubernetes.io/projected/621636ff-a5a1-4705-859c-3adbd54cbb54-kube-api-access-9glk5\") pod \"hello-world-app-5d498dc89-m9rs7\" (UID: \"621636ff-a5a1-4705-859c-3adbd54cbb54\") " pod="default/hello-world-app-5d498dc89-m9rs7"
	
	
	==> storage-provisioner [3949e1e589d6de8f336d9514c13a17e20b8e77323cd06ebc8c00867db7c45eb4] <==
	W1025 08:34:58.798105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:00.802241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:00.808130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:02.811900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:02.819940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:04.823024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:04.829011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:06.832858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:06.841784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:08.846760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:08.853211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:10.857516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:10.865674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:12.869163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:12.874853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:14.878413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:14.884930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:16.890260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:16.897427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:18.901646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:18.907521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:20.911468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:20.917737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:22.932888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:22.939701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
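Note on the log tail above: three recurring error families are visible, and none of them looks like the cause of the ingress timeout. The kube-scheduler "Failed to watch ... forbidden" messages occur during startup before the scheduler's RBAC grants have synced and stop after 08:30:16; the kubelet eviction-manager "missing image stats" errors are a recurring mismatch between the stats shape the kubelet expects and what this CRI-O version reports; and the storage-provisioner warnings come from leader election that still uses the deprecated v1 Endpoints API. A quick way to confirm the scheduler errors were only a startup race, assuming the addons-631036 context from this run is still reachable:

	kubectl --context addons-631036 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler
	kubectl --context addons-631036 auth can-i list resourceslices.resource.k8s.io --as=system:kube-scheduler

Both should print "yes"; "no" answers long after startup would point at a genuine RBAC problem rather than the transient race seen here.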
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-631036 -n addons-631036
helpers_test.go:269: (dbg) Run:  kubectl --context addons-631036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-m9rs7 ingress-nginx-admission-create-29xlb ingress-nginx-admission-patch-rmrl2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-631036 describe pod hello-world-app-5d498dc89-m9rs7 ingress-nginx-admission-create-29xlb ingress-nginx-admission-patch-rmrl2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-631036 describe pod hello-world-app-5d498dc89-m9rs7 ingress-nginx-admission-create-29xlb ingress-nginx-admission-patch-rmrl2: exit status 1 (87.816598ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-m9rs7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-631036/192.168.39.24
	Start Time:       Sat, 25 Oct 2025 08:35:22 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9glk5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9glk5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-m9rs7 to addons-631036
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-29xlb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rmrl2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-631036 describe pod hello-world-app-5d498dc89-m9rs7 ingress-nginx-admission-create-29xlb ingress-nginx-admission-patch-rmrl2: exit status 1
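The exit status 1 here is expected kubectl behavior: describe returns non-zero when any named object cannot be found, even though it successfully printed the one pod that does exist. The two admission pods were reported by the -A listing above, so at that moment they still existed as completed ingress-nginx Job pods; the describe call most likely misses them because it omits their namespace and searches default. A variant that should have reached them, assuming the completed Job pods had not yet been cleaned up:

	kubectl --context addons-631036 -n ingress-nginx describe pod ingress-nginx-admission-create-29xlb ingress-nginx-admission-patch-rmrl2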
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-631036 addons disable ingress-dns --alsologtostderr -v=1: (1.710359123s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-631036 addons disable ingress --alsologtostderr -v=1: (7.795867872s)
--- FAIL: TestAddons/parallel/Ingress (159.19s)
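One detail from the describe output worth separating from the failure: hello-world-app-5d498dc89-m9rs7 had been scheduled only 3 seconds before the snapshot and was still in ContainerCreating while pulling docker.io/kicbase/echo-server:1.0, a normal transient state rather than part of the ingress problem. A sketch of how to confirm it settles, assuming the context is still available:

	kubectl --context addons-631036 wait --for=condition=Ready pod -l app=hello-world-app --timeout=120s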

                                                
                                    
x
+
TestPreload (122.29s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-008752 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-008752 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.38038083s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-008752 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-008752 image pull gcr.io/k8s-minikube/busybox: (3.806185543s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-008752
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-008752: (7.250748968s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-008752 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1025 09:22:12.833496    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-008752 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (45.932567811s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-008752 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
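The image list above contains exactly the contents of the v1.32.0 preload tarball; the gcr.io/k8s-minikube/busybox image pulled before the stop is gone, which suggests the restart repopulated the CRI-O image store from the preload instead of preserving images added after the first start. A minimal manual reproduction of the failing sequence, assuming the same profile name and flags the test used:

	out/minikube-linux-amd64 start -p test-preload-008752 --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0
	out/minikube-linux-amd64 -p test-preload-008752 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-008752
	out/minikube-linux-amd64 start -p test-preload-008752 --memory=3072 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-008752 image list

If busybox is missing from the final listing, the failure reproduces outside the test harness.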
panic.go:636: *** TestPreload FAILED at 2025-10-25 09:22:24.219845364 +0000 UTC m=+3176.188772955
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-008752 -n test-preload-008752
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-008752 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-008752 logs -n 25: (1.165956069s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-557334 ssh -n multinode-557334-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ ssh     │ multinode-557334 ssh -n multinode-557334 sudo cat /home/docker/cp-test_multinode-557334-m03_multinode-557334.txt                                          │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ cp      │ multinode-557334 cp multinode-557334-m03:/home/docker/cp-test.txt multinode-557334-m02:/home/docker/cp-test_multinode-557334-m03_multinode-557334-m02.txt │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ ssh     │ multinode-557334 ssh -n multinode-557334-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ ssh     │ multinode-557334 ssh -n multinode-557334-m02 sudo cat /home/docker/cp-test_multinode-557334-m03_multinode-557334-m02.txt                                  │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ node    │ multinode-557334 node stop m03                                                                                                                            │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:09 UTC │
	│ node    │ multinode-557334 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:09 UTC │ 25 Oct 25 09:10 UTC │
	│ node    │ list -p multinode-557334                                                                                                                                  │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │                     │
	│ stop    │ -p multinode-557334                                                                                                                                       │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:10 UTC │ 25 Oct 25 09:12 UTC │
	│ start   │ -p multinode-557334 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:12 UTC │ 25 Oct 25 09:14 UTC │
	│ node    │ list -p multinode-557334                                                                                                                                  │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │                     │
	│ node    │ multinode-557334 node delete m03                                                                                                                          │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:14 UTC │
	│ stop    │ multinode-557334 stop                                                                                                                                     │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:14 UTC │ 25 Oct 25 09:17 UTC │
	│ start   │ -p multinode-557334 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:17 UTC │ 25 Oct 25 09:19 UTC │
	│ node    │ list -p multinode-557334                                                                                                                                  │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │                     │
	│ start   │ -p multinode-557334-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-557334-m02 │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │                     │
	│ start   │ -p multinode-557334-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-557334-m03 │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:20 UTC │
	│ node    │ add -p multinode-557334                                                                                                                                   │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:20 UTC │                     │
	│ delete  │ -p multinode-557334-m03                                                                                                                                   │ multinode-557334-m03 │ jenkins │ v1.37.0 │ 25 Oct 25 09:20 UTC │ 25 Oct 25 09:20 UTC │
	│ delete  │ -p multinode-557334                                                                                                                                       │ multinode-557334     │ jenkins │ v1.37.0 │ 25 Oct 25 09:20 UTC │ 25 Oct 25 09:20 UTC │
	│ start   │ -p test-preload-008752 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-008752  │ jenkins │ v1.37.0 │ 25 Oct 25 09:20 UTC │ 25 Oct 25 09:21 UTC │
	│ image   │ test-preload-008752 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-008752  │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │ 25 Oct 25 09:21 UTC │
	│ stop    │ -p test-preload-008752                                                                                                                                    │ test-preload-008752  │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │ 25 Oct 25 09:21 UTC │
	│ start   │ -p test-preload-008752 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-008752  │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │ 25 Oct 25 09:22 UTC │
	│ image   │ test-preload-008752 image list                                                                                                                            │ test-preload-008752  │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:22 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:21:38
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:21:38.140713   32993 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:21:38.140847   32993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:21:38.140859   32993 out.go:374] Setting ErrFile to fd 2...
	I1025 09:21:38.140864   32993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:21:38.141089   32993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
	I1025 09:21:38.141708   32993 out.go:368] Setting JSON to false
	I1025 09:21:38.142777   32993 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3848,"bootTime":1761380250,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:21:38.142869   32993 start.go:141] virtualization: kvm guest
	I1025 09:21:38.145095   32993 out.go:179] * [test-preload-008752] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:21:38.147201   32993 notify.go:220] Checking for updates...
	I1025 09:21:38.147266   32993 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:21:38.148778   32993 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:21:38.150147   32993 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	I1025 09:21:38.151893   32993 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 09:21:38.153485   32993 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:21:38.155138   32993 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:21:38.157017   32993 config.go:182] Loaded profile config "test-preload-008752": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1025 09:21:38.158961   32993 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1025 09:21:38.160495   32993 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:21:38.198760   32993 out.go:179] * Using the kvm2 driver based on existing profile
	I1025 09:21:38.200283   32993 start.go:305] selected driver: kvm2
	I1025 09:21:38.200305   32993 start.go:925] validating driver "kvm2" against &{Name:test-preload-008752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-008752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:21:38.200425   32993 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:21:38.201503   32993 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:21:38.201532   32993 cni.go:84] Creating CNI manager for ""
	I1025 09:21:38.201594   32993 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 09:21:38.201651   32993 start.go:349] cluster config:
	{Name:test-preload-008752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-008752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:21:38.201822   32993 iso.go:125] acquiring lock: {Name:mk56ae07ef3e2fe29ebca77d84768cf173c5b3d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:21:38.203764   32993 out.go:179] * Starting "test-preload-008752" primary control-plane node in "test-preload-008752" cluster
	I1025 09:21:38.205261   32993 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1025 09:21:38.226085   32993 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1025 09:21:38.226123   32993 cache.go:58] Caching tarball of preloaded images
	I1025 09:21:38.226322   32993 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1025 09:21:38.228632   32993 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1025 09:21:38.230197   32993 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1025 09:21:38.256862   32993 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1025 09:21:38.256928   32993 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1025 09:21:41.487288   32993 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
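	The two lines above are the preload fetch the restart depends on: the tarball is downloaded from GCS and verified against the MD5 checksum returned by the API. A sketch of the equivalent manual fetch and check, using the URL and checksum from this log:
	
		curl -fLo preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4"
		echo "2acdb4dde52794f2167c79dcee7507ae  preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -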
	I1025 09:21:41.487406   32993 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/config.json ...
	I1025 09:21:41.487634   32993 start.go:360] acquireMachinesLock for test-preload-008752: {Name:mk307ae3583c207a47794987d4930662cf65d417 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 09:21:41.487699   32993 start.go:364] duration metric: took 43.972µs to acquireMachinesLock for "test-preload-008752"
	I1025 09:21:41.487714   32993 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:21:41.487721   32993 fix.go:54] fixHost starting: 
	I1025 09:21:41.489808   32993 fix.go:112] recreateIfNeeded on test-preload-008752: state=Stopped err=<nil>
	W1025 09:21:41.489837   32993 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 09:21:41.492018   32993 out.go:252] * Restarting existing kvm2 VM for "test-preload-008752" ...
	I1025 09:21:41.492086   32993 main.go:141] libmachine: starting domain...
	I1025 09:21:41.492097   32993 main.go:141] libmachine: ensuring networks are active...
	I1025 09:21:41.492934   32993 main.go:141] libmachine: Ensuring network default is active
	I1025 09:21:41.493454   32993 main.go:141] libmachine: Ensuring network mk-test-preload-008752 is active
	I1025 09:21:41.493849   32993 main.go:141] libmachine: getting domain XML...
	I1025 09:21:41.494990   32993 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-008752</name>
	  <uuid>1dd23be2-6ae7-423a-856f-1ccf3e6e7fd7</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/test-preload-008752/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/test-preload-008752/test-preload-008752.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:b2:d8:c5'/>
	      <source network='mk-test-preload-008752'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:a7:0e:81'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
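	The domain XML above is the definition the kvm2 driver boots. The same definition, and the lease-assigned IP the driver then waits for, can be inspected with virsh, assuming virsh is installed on the host and the qemu:///system URI shown in this log:
	
		virsh --connect qemu:///system dumpxml test-preload-008752
		virsh --connect qemu:///system domifaddr test-preload-008752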
	
	I1025 09:21:42.785916   32993 main.go:141] libmachine: waiting for domain to start...
	I1025 09:21:42.787296   32993 main.go:141] libmachine: domain is now running
	I1025 09:21:42.787315   32993 main.go:141] libmachine: waiting for IP...
	I1025 09:21:42.788013   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:42.788668   32993 main.go:141] libmachine: domain test-preload-008752 has current primary IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:42.788683   32993 main.go:141] libmachine: found domain IP: 192.168.39.135
	I1025 09:21:42.788690   32993 main.go:141] libmachine: reserving static IP address...
	I1025 09:21:42.789167   32993 main.go:141] libmachine: found host DHCP lease matching {name: "test-preload-008752", mac: "52:54:00:b2:d8:c5", ip: "192.168.39.135"} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:20:40 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:42.789194   32993 main.go:141] libmachine: skip adding static IP to network mk-test-preload-008752 - found existing host DHCP lease matching {name: "test-preload-008752", mac: "52:54:00:b2:d8:c5", ip: "192.168.39.135"}
	I1025 09:21:42.789206   32993 main.go:141] libmachine: reserved static IP address 192.168.39.135 for domain test-preload-008752
	I1025 09:21:42.789213   32993 main.go:141] libmachine: waiting for SSH...
	I1025 09:21:42.789221   32993 main.go:141] libmachine: Getting to WaitForSSH function...
	I1025 09:21:42.792225   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:42.793561   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:20:40 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:42.793610   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:42.793904   32993 main.go:141] libmachine: Using SSH client type: native
	I1025 09:21:42.794203   32993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1025 09:21:42.794219   32993 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1025 09:21:45.896536   32993 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.135:22: connect: no route to host
	I1025 09:21:51.976537   32993 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.135:22: connect: no route to host
	I1025 09:21:55.085265   32993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:21:55.089492   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.090000   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:55.090033   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.090374   32993 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/config.json ...
	I1025 09:21:55.090612   32993 machine.go:93] provisionDockerMachine start ...
	I1025 09:21:55.093526   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.094010   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:55.094073   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.094276   32993 main.go:141] libmachine: Using SSH client type: native
	I1025 09:21:55.094482   32993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1025 09:21:55.094495   32993 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:21:55.201412   32993 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 09:21:55.201447   32993 buildroot.go:166] provisioning hostname "test-preload-008752"
	I1025 09:21:55.204537   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.204985   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:55.205025   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.205313   32993 main.go:141] libmachine: Using SSH client type: native
	I1025 09:21:55.205562   32993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1025 09:21:55.205576   32993 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-008752 && echo "test-preload-008752" | sudo tee /etc/hostname
	I1025 09:21:55.342481   32993 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-008752
	
	I1025 09:21:55.345488   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.345938   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:55.345968   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.346129   32993 main.go:141] libmachine: Using SSH client type: native
	I1025 09:21:55.346380   32993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1025 09:21:55.346399   32993 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-008752' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-008752/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-008752' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:21:55.467591   32993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:21:55.467620   32993 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5973/.minikube}
	I1025 09:21:55.467642   32993 buildroot.go:174] setting up certificates
	I1025 09:21:55.467651   32993 provision.go:84] configureAuth start
	I1025 09:21:55.471028   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.471545   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:55.471576   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.473905   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.474334   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:55.474355   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.474498   32993 provision.go:143] copyHostCerts
	I1025 09:21:55.474565   32993 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5973/.minikube/ca.pem, removing ...
	I1025 09:21:55.474583   32993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5973/.minikube/ca.pem
	I1025 09:21:55.474663   32993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5973/.minikube/ca.pem (1078 bytes)
	I1025 09:21:55.474781   32993 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5973/.minikube/cert.pem, removing ...
	I1025 09:21:55.474800   32993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5973/.minikube/cert.pem
	I1025 09:21:55.474835   32993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5973/.minikube/cert.pem (1123 bytes)
	I1025 09:21:55.474895   32993 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5973/.minikube/key.pem, removing ...
	I1025 09:21:55.474906   32993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5973/.minikube/key.pem
	I1025 09:21:55.474930   32993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5973/.minikube/key.pem (1679 bytes)
	I1025 09:21:55.474980   32993 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca-key.pem org=jenkins.test-preload-008752 san=[127.0.0.1 192.168.39.135 localhost minikube test-preload-008752]
	I1025 09:21:55.618308   32993 provision.go:177] copyRemoteCerts
	I1025 09:21:55.618367   32993 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:21:55.620895   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.621550   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:55.621578   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.621843   32993 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/test-preload-008752/id_rsa Username:docker}
	I1025 09:21:55.705544   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:21:55.737685   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 09:21:55.769167   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:21:55.801223   32993 provision.go:87] duration metric: took 333.558767ms to configureAuth
	I1025 09:21:55.801288   32993 buildroot.go:189] setting minikube options for container-runtime
	I1025 09:21:55.801467   32993 config.go:182] Loaded profile config "test-preload-008752": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1025 09:21:55.804873   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.805315   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:55.805341   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:55.805605   32993 main.go:141] libmachine: Using SSH client type: native
	I1025 09:21:55.805822   32993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1025 09:21:55.805840   32993 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:21:56.055118   32993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:21:56.055148   32993 machine.go:96] duration metric: took 964.521545ms to provisionDockerMachine
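
The CRIO_MINIKUBE_OPTIONS write above is an ordinary remote command: the sysconfig drop-in is piped through sudo tee and crio is restarted, all over one SSH session. A rough sketch with golang.org/x/crypto/ssh, reusing the address and key path from the log; error handling is deliberately blunt:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21796-5973/.minikube/machines/test-preload-008752/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	}
	client, err := ssh.Dial("tcp", "192.168.39.135:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// The same command the log shows under "About to run SSH command".
	out, err := sess.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
	fmt.Printf("err=%v output=%s\n", err, out)
}
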
	I1025 09:21:56.055177   32993 start.go:293] postStartSetup for "test-preload-008752" (driver="kvm2")
	I1025 09:21:56.055188   32993 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:21:56.055290   32993 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:21:56.058163   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:56.058566   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:56.058598   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:56.058801   32993 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/test-preload-008752/id_rsa Username:docker}
	I1025 09:21:56.142779   32993 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:21:56.147827   32993 info.go:137] Remote host: Buildroot 2025.02
	I1025 09:21:56.147854   32993 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5973/.minikube/addons for local assets ...
	I1025 09:21:56.147942   32993 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5973/.minikube/files for local assets ...
	I1025 09:21:56.148087   32993 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem -> 98812.pem in /etc/ssl/certs
	I1025 09:21:56.148229   32993 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:21:56.161894   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem --> /etc/ssl/certs/98812.pem (1708 bytes)
	I1025 09:21:56.192644   32993 start.go:296] duration metric: took 137.447392ms for postStartSetup
	I1025 09:21:56.192686   32993 fix.go:56] duration metric: took 14.704965267s for fixHost
	I1025 09:21:56.195954   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:56.196690   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:56.196722   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:56.196942   32993 main.go:141] libmachine: Using SSH client type: native
	I1025 09:21:56.197166   32993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1025 09:21:56.197179   32993 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 09:21:56.307209   32993 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761384116.260302443
	
	I1025 09:21:56.307266   32993 fix.go:216] guest clock: 1761384116.260302443
	I1025 09:21:56.307276   32993 fix.go:229] Guest: 2025-10-25 09:21:56.260302443 +0000 UTC Remote: 2025-10-25 09:21:56.192690085 +0000 UTC m=+18.100493567 (delta=67.612358ms)
	I1025 09:21:56.307316   32993 fix.go:200] guest clock delta is within tolerance: 67.612358ms
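
The clock check runs date +%s.%N on the guest, parses the fractional epoch, and accepts the drift when it is within tolerance (67ms here); a larger delta would trigger a clock resync. A small sketch of the comparison, using an illustrative 2s threshold rather than minikube's exact value:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` and returns the
// absolute offset from the local (host) clock.
func guestClockDelta(dateOutput string) (time.Duration, error) {
	f, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
	if err != nil {
		return 0, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	guest := time.Unix(sec, nsec)
	return time.Duration(math.Abs(float64(time.Since(guest)))), nil
}

func main() {
	// Guest timestamp taken from the log above.
	delta, err := guestClockDelta("1761384116.260302443")
	if err != nil {
		panic(err)
	}
	tolerance := 2 * time.Second // illustrative threshold, not minikube's exact value
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta < tolerance)
}
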
	I1025 09:21:56.307335   32993 start.go:83] releasing machines lock for "test-preload-008752", held for 14.819626412s
	I1025 09:21:56.310717   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:56.311168   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:56.311195   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:56.311917   32993 ssh_runner.go:195] Run: cat /version.json
	I1025 09:21:56.311948   32993 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:21:56.315161   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:56.315291   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:56.315613   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:56.315638   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:56.315703   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:56.315733   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:56.315784   32993 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/test-preload-008752/id_rsa Username:docker}
	I1025 09:21:56.316007   32993 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/test-preload-008752/id_rsa Username:docker}
	I1025 09:21:56.416262   32993 ssh_runner.go:195] Run: systemctl --version
	I1025 09:21:56.422800   32993 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:21:56.583410   32993 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:21:56.590092   32993 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:21:56.590173   32993 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:21:56.609969   32993 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 09:21:56.610001   32993 start.go:495] detecting cgroup driver to use...
	I1025 09:21:56.610068   32993 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:21:56.630402   32993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:21:56.648134   32993 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:21:56.648208   32993 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:21:56.666638   32993 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:21:56.683540   32993 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:21:56.846962   32993 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:21:57.067936   32993 docker.go:234] disabling docker service ...
	I1025 09:21:57.068031   32993 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:21:57.085479   32993 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:21:57.101192   32993 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:21:57.257212   32993 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:21:57.399486   32993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:21:57.415653   32993 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:21:57.439316   32993 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1025 09:21:57.439384   32993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:21:57.452382   32993 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:21:57.452444   32993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:21:57.466863   32993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:21:57.480169   32993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:21:57.493459   32993 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:21:57.506982   32993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:21:57.520039   32993 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:21:57.542299   32993 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
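
The run of sed -i commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and seed default_sysctls so unprivileged ports start at 0. The first two substitutions expressed as a Go regexp rewrite, purely as a sketch of the technique (path and patterns taken from the log):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
		panic(err)
	}
}
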
	I1025 09:21:57.555794   32993 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:21:57.567629   32993 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 09:21:57.567707   32993 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 09:21:57.589192   32993 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:21:57.601745   32993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:21:57.744146   32993 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:21:57.860781   32993 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:21:57.860882   32993 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:21:57.866551   32993 start.go:563] Will wait 60s for crictl version
	I1025 09:21:57.866625   32993 ssh_runner.go:195] Run: which crictl
	I1025 09:21:57.871265   32993 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 09:21:57.915638   32993 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 09:21:57.915738   32993 ssh_runner.go:195] Run: crio --version
	I1025 09:21:57.947299   32993 ssh_runner.go:195] Run: crio --version
	I1025 09:21:57.980599   32993 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1025 09:21:57.985671   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:57.986264   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:21:57.986295   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:21:57.986496   32993 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1025 09:21:57.991541   32993 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
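
That one-liner is an idempotent /etc/hosts update: filter out any stale host.minikube.internal line, append the fresh mapping, and copy the staged file back with sudo. Roughly the same pattern in Go (hostname and IP from the log; the final privileged copy is left to sudo cp as above):

package main

import (
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as the grep -v: drop any stale mapping.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.39.1\thost.minikube.internal")
	// Stage the result; the log then installs it with `sudo cp`.
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
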
	I1025 09:21:58.007790   32993 kubeadm.go:883] updating cluster {Name:test-preload-008752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-008752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:21:58.007938   32993 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1025 09:21:58.007986   32993 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:21:58.048375   32993 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1025 09:21:58.048446   32993 ssh_runner.go:195] Run: which lz4
	I1025 09:21:58.053495   32993 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 09:21:58.058709   32993 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 09:21:58.058754   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1025 09:21:59.553929   32993 crio.go:462] duration metric: took 1.500468641s to copy over tarball
	I1025 09:21:59.554010   32993 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 09:22:01.307732   32993 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.753694446s)
	I1025 09:22:01.307769   32993 crio.go:469] duration metric: took 1.753807783s to extract the tarball
	I1025 09:22:01.307777   32993 ssh_runner.go:146] rm: /preloaded.tar.lz4
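
Moving the preload tarball dominates this phase: about 1.5s to scp ~380MB and another 1.75s to unpack it. The extraction is plain tar with an lz4 decompressor; timed locally it would look like this sketch, printing the duration the way the log's "duration metric" lines do:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// The same command the log runs over SSH, preserving xattrs so
	// file capabilities survive the unpack.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v: %s\n", err, out)
		return
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}
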
	I1025 09:22:01.349129   32993 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:22:01.397711   32993 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:22:01.397738   32993 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:22:01.397748   32993 kubeadm.go:934] updating node { 192.168.39.135 8443 v1.32.0 crio true true} ...
	I1025 09:22:01.397848   32993 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-008752 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-008752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:22:01.397928   32993 ssh_runner.go:195] Run: crio config
	I1025 09:22:01.446279   32993 cni.go:84] Creating CNI manager for ""
	I1025 09:22:01.446304   32993 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 09:22:01.446327   32993 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:22:01.446347   32993 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.135 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-008752 NodeName:test-preload-008752 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:22:01.446452   32993 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-008752"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.135"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.135"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:22:01.446507   32993 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1025 09:22:01.459030   32993 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:22:01.459099   32993 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:22:01.471546   32993 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1025 09:22:01.493620   32993 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:22:01.514991   32993 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
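
The "scp memory -->" lines stream rendered buffers (the kubelet drop-in, the unit file, kubeadm.yaml.new) straight to remote paths instead of copying files from disk. Over an existing SSH session that amounts to a stdin pipe into sudo tee; a sketch of such a helper, assuming a *ssh.Client built as in the earlier example:

package provision

import "golang.org/x/crypto/ssh"

// writeRemote streams an in-memory buffer to a root-owned remote path by
// piping it into `sudo tee`, which is what the "scp memory -->" lines do.
func writeRemote(client *ssh.Client, path string, buf []byte) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	stdin, err := sess.StdinPipe()
	if err != nil {
		return err
	}
	if err := sess.Start("sudo tee " + path + " >/dev/null"); err != nil {
		return err
	}
	if _, err := stdin.Write(buf); err != nil {
		return err
	}
	stdin.Close() // EOF lets tee flush and exit
	return sess.Wait()
}
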
	I1025 09:22:01.536814   32993 ssh_runner.go:195] Run: grep 192.168.39.135	control-plane.minikube.internal$ /etc/hosts
	I1025 09:22:01.541496   32993 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:22:01.556496   32993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:22:01.703506   32993 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:22:01.739483   32993 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752 for IP: 192.168.39.135
	I1025 09:22:01.739506   32993 certs.go:195] generating shared ca certs ...
	I1025 09:22:01.739523   32993 certs.go:227] acquiring lock for ca certs: {Name:mke8d6ba2f98d813f76972dbfee9daa2e84822df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:22:01.739662   32993 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key
	I1025 09:22:01.739702   32993 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key
	I1025 09:22:01.739709   32993 certs.go:257] generating profile certs ...
	I1025 09:22:01.739780   32993 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/client.key
	I1025 09:22:01.739848   32993 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/apiserver.key.cbd8ffb4
	I1025 09:22:01.739882   32993 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/proxy-client.key
	I1025 09:22:01.739978   32993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/9881.pem (1338 bytes)
	W1025 09:22:01.740007   32993 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5973/.minikube/certs/9881_empty.pem, impossibly tiny 0 bytes
	I1025 09:22:01.740013   32993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:22:01.740033   32993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:22:01.740052   32993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:22:01.740096   32993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/key.pem (1679 bytes)
	I1025 09:22:01.740135   32993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem (1708 bytes)
	I1025 09:22:01.740673   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:22:01.775511   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 09:22:01.822789   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:22:01.855674   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1025 09:22:01.887989   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 09:22:01.919983   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:22:01.950939   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:22:01.983043   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:22:02.015551   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/certs/9881.pem --> /usr/share/ca-certificates/9881.pem (1338 bytes)
	I1025 09:22:02.047662   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem --> /usr/share/ca-certificates/98812.pem (1708 bytes)
	I1025 09:22:02.079345   32993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:22:02.111691   32993 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:22:02.134540   32993 ssh_runner.go:195] Run: openssl version
	I1025 09:22:02.141550   32993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9881.pem && ln -fs /usr/share/ca-certificates/9881.pem /etc/ssl/certs/9881.pem"
	I1025 09:22:02.155445   32993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9881.pem
	I1025 09:22:02.161117   32993 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/9881.pem
	I1025 09:22:02.161188   32993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9881.pem
	I1025 09:22:02.168935   32993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9881.pem /etc/ssl/certs/51391683.0"
	I1025 09:22:02.182931   32993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98812.pem && ln -fs /usr/share/ca-certificates/98812.pem /etc/ssl/certs/98812.pem"
	I1025 09:22:02.197487   32993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98812.pem
	I1025 09:22:02.203065   32993 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/98812.pem
	I1025 09:22:02.203137   32993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98812.pem
	I1025 09:22:02.210761   32993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98812.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:22:02.224270   32993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:22:02.238270   32993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:22:02.243603   32993 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:22:02.243664   32993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:22:02.250971   32993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:22:02.264910   32993 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:22:02.270622   32993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:22:02.278455   32993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:22:02.286127   32993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:22:02.294230   32993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:22:02.302149   32993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:22:02.309942   32993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
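
Each openssl x509 -checkend 86400 run asks whether a control-plane certificate expires within the next 24h, so minikube can regenerate it before it goes stale. The equivalent check in pure Go with crypto/x509 (the certificate path is one of the files probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// openssl x509 -checkend 86400: fail if the cert expires within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regenerate")
	} else {
		fmt.Println("certificate valid beyond the check window")
	}
}
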
	I1025 09:22:02.317912   32993 kubeadm.go:400] StartCluster: {Name:test-preload-008752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-008752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:22:02.317983   32993 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:22:02.318027   32993 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:22:02.359126   32993 cri.go:89] found id: ""
	I1025 09:22:02.359193   32993 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:22:02.371809   32993 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:22:02.371829   32993 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:22:02.371875   32993 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:22:02.385261   32993 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:22:02.385697   32993 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-008752" does not appear in /home/jenkins/minikube-integration/21796-5973/kubeconfig
	I1025 09:22:02.385826   32993 kubeconfig.go:62] /home/jenkins/minikube-integration/21796-5973/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-008752" cluster setting kubeconfig missing "test-preload-008752" context setting]
	I1025 09:22:02.386124   32993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/kubeconfig: {Name:mk7395a01001bce28a4f8d18a1c883ac67624078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:22:02.386686   32993 kapi.go:59] client config for test-preload-008752: &rest.Config{Host:"https://192.168.39.135:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/client.crt", KeyFile:"/home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/client.key", CAFile:"/home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:22:02.387088   32993 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 09:22:02.387104   32993 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 09:22:02.387109   32993 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 09:22:02.387113   32993 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 09:22:02.387117   32993 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 09:22:02.387463   32993 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:22:02.399690   32993 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.135
	I1025 09:22:02.399725   32993 kubeadm.go:1160] stopping kube-system containers ...
	I1025 09:22:02.399736   32993 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1025 09:22:02.399792   32993 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:22:02.441674   32993 cri.go:89] found id: ""
	I1025 09:22:02.441759   32993 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 09:22:02.469660   32993 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:22:02.482388   32993 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:22:02.482418   32993 kubeadm.go:157] found existing configuration files:
	
	I1025 09:22:02.482470   32993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:22:02.493742   32993 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:22:02.493819   32993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:22:02.506314   32993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:22:02.517674   32993 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:22:02.517759   32993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:22:02.529685   32993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:22:02.541118   32993 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:22:02.541200   32993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:22:02.553288   32993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:22:02.564505   32993 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:22:02.564578   32993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:22:02.577013   32993 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:22:02.589553   32993 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:22:02.648110   32993 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:22:03.606014   32993 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:22:03.849182   32993 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:22:03.917170   32993 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:22:04.003937   32993 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:22:04.004031   32993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:22:04.504527   32993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:22:05.004125   32993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:22:05.504223   32993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:22:06.004801   32993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:22:06.040933   32993 api_server.go:72] duration metric: took 2.037015895s to wait for apiserver process to appear ...
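
The repeated pgrep runs above are a fixed-interval poll: probe every 500ms until a kube-apiserver process matching the pattern exists or the 60s budget runs out. Stripped to its essentials, and assuming the probe runs on the node itself rather than over SSH:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second) // the log's "Will wait 60s" budget
	for time.Now().Before(deadline) {
		// Same probe as the log: does a kube-apiserver process exist yet?
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process appeared")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}
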
	I1025 09:22:06.040960   32993 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:22:06.040977   32993 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I1025 09:22:08.364691   32993 api_server.go:279] https://192.168.39.135:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 09:22:08.364726   32993 api_server.go:103] status: https://192.168.39.135:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 09:22:08.364746   32993 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I1025 09:22:08.412999   32993 api_server.go:279] https://192.168.39.135:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 09:22:08.413038   32993 api_server.go:103] status: https://192.168.39.135:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 09:22:08.541443   32993 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I1025 09:22:08.546442   32993 api_server.go:279] https://192.168.39.135:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:22:08.546475   32993 api_server.go:103] status: https://192.168.39.135:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:22:09.041174   32993 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I1025 09:22:09.055566   32993 api_server.go:279] https://192.168.39.135:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:22:09.055598   32993 api_server.go:103] status: https://192.168.39.135:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:22:09.541342   32993 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I1025 09:22:09.555933   32993 api_server.go:279] https://192.168.39.135:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:22:09.555967   32993 api_server.go:103] status: https://192.168.39.135:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:22:10.041673   32993 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I1025 09:22:10.045962   32993 api_server.go:279] https://192.168.39.135:8443/healthz returned 200:
	ok
	I1025 09:22:10.052744   32993 api_server.go:141] control plane version: v1.32.0
	I1025 09:22:10.052772   32993 api_server.go:131] duration metric: took 4.011805795s to wait for apiserver health ...
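
The healthz loop above tolerates 403 (anonymous requests before the RBAC bootstrap hook finishes) and 500 (post-start hooks still failing), and only stops once /healthz returns a bare 200 "ok". A bare-bones version of one probe, authenticating with the profile's client certificate and the cluster CA from the paths in the log:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	cert, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/client.crt",
		"/home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/client.key")
	if err != nil {
		panic(err)
	}
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		RootCAs:      pool,
		Certificates: []tls.Certificate{cert},
	}}}
	resp, err := client.Get("https://192.168.39.135:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// 403 and 500 mean "not ready yet"; 200 with body "ok" means healthy.
	fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
}
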
	I1025 09:22:10.052781   32993 cni.go:84] Creating CNI manager for ""
	I1025 09:22:10.052787   32993 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 09:22:10.054846   32993 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 09:22:10.056378   32993 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 09:22:10.069736   32993 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1025 09:22:10.106673   32993 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:22:10.112295   32993 system_pods.go:59] 8 kube-system pods found
	I1025 09:22:10.112333   32993 system_pods.go:61] "coredns-668d6bf9bc-bxgdm" [c861e20f-5d5b-469c-92c4-d757407d2c99] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:22:10.112341   32993 system_pods.go:61] "coredns-668d6bf9bc-pdk94" [49e32c1f-1d63-41fb-99a5-dbd7728bb84d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:22:10.112348   32993 system_pods.go:61] "etcd-test-preload-008752" [bbbb9773-6e6b-4b39-83e0-f35a712d0105] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:22:10.112355   32993 system_pods.go:61] "kube-apiserver-test-preload-008752" [9e56ab26-29aa-4c9f-85f8-618feb05d298] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:22:10.112361   32993 system_pods.go:61] "kube-controller-manager-test-preload-008752" [0ab112d0-9340-4a29-bdef-315f2dded7ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:22:10.112366   32993 system_pods.go:61] "kube-proxy-4qzgc" [6925f8ea-222d-4a19-85b3-ed3dc1453713] Running
	I1025 09:22:10.112372   32993 system_pods.go:61] "kube-scheduler-test-preload-008752" [e2df4829-aff3-4219-8f48-f66c87f755e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:22:10.112377   32993 system_pods.go:61] "storage-provisioner" [cad4d99c-2c55-4e37-8275-d3b54d9e48f9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:22:10.112383   32993 system_pods.go:74] duration metric: took 5.687166ms to wait for pod list to return data ...
	I1025 09:22:10.112391   32993 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:22:10.116457   32993 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 09:22:10.116488   32993 node_conditions.go:123] node cpu capacity is 2
	I1025 09:22:10.116501   32993 node_conditions.go:105] duration metric: took 4.105694ms to run NodePressure ...
	I1025 09:22:10.116563   32993 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:22:10.385276   32993 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1025 09:22:10.388905   32993 kubeadm.go:743] kubelet initialised
	I1025 09:22:10.388940   32993 kubeadm.go:744] duration metric: took 3.628826ms waiting for restarted kubelet to initialise ...
	I1025 09:22:10.388962   32993 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:22:10.406827   32993 ops.go:34] apiserver oom_adj: -16
	I1025 09:22:10.406854   32993 kubeadm.go:601] duration metric: took 8.035019291s to restartPrimaryControlPlane
	I1025 09:22:10.406863   32993 kubeadm.go:402] duration metric: took 8.088959477s to StartCluster
	I1025 09:22:10.406878   32993 settings.go:142] acquiring lock: {Name:mkceaa31f1735308eeec0f271d1ae2367ed96dc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:22:10.406943   32993 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-5973/kubeconfig
	I1025 09:22:10.407654   32993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/kubeconfig: {Name:mk7395a01001bce28a4f8d18a1c883ac67624078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:22:10.407969   32993 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:22:10.408049   32993 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:22:10.408135   32993 addons.go:69] Setting storage-provisioner=true in profile "test-preload-008752"
	I1025 09:22:10.408150   32993 addons.go:238] Setting addon storage-provisioner=true in "test-preload-008752"
	W1025 09:22:10.408156   32993 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:22:10.408180   32993 host.go:66] Checking if "test-preload-008752" exists ...
	I1025 09:22:10.408196   32993 addons.go:69] Setting default-storageclass=true in profile "test-preload-008752"
	I1025 09:22:10.408221   32993 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-008752"
	I1025 09:22:10.408253   32993 config.go:182] Loaded profile config "test-preload-008752": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1025 09:22:10.409961   32993 out.go:179] * Verifying Kubernetes components...
	I1025 09:22:10.410784   32993 kapi.go:59] client config for test-preload-008752: &rest.Config{Host:"https://192.168.39.135:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/client.crt", KeyFile:"/home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/client.key", CAFile:"/home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:22:10.411069   32993 addons.go:238] Setting addon default-storageclass=true in "test-preload-008752"
	W1025 09:22:10.411094   32993 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:22:10.411119   32993 host.go:66] Checking if "test-preload-008752" exists ...
	I1025 09:22:10.411657   32993 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:22:10.411731   32993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:22:10.412789   32993 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:22:10.412810   32993 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:22:10.413037   32993 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:22:10.413053   32993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:22:10.416030   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:22:10.416390   32993 main.go:141] libmachine: domain test-preload-008752 has defined MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:22:10.416473   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:22:10.416508   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:22:10.416665   32993 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/test-preload-008752/id_rsa Username:docker}
	I1025 09:22:10.416909   32993 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:d8:c5", ip: ""} in network mk-test-preload-008752: {Iface:virbr1 ExpiryTime:2025-10-25 10:21:53 +0000 UTC Type:0 Mac:52:54:00:b2:d8:c5 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-008752 Clientid:01:52:54:00:b2:d8:c5}
	I1025 09:22:10.416947   32993 main.go:141] libmachine: domain test-preload-008752 has defined IP address 192.168.39.135 and MAC address 52:54:00:b2:d8:c5 in network mk-test-preload-008752
	I1025 09:22:10.417148   32993 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/test-preload-008752/id_rsa Username:docker}
	I1025 09:22:10.655476   32993 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:22:10.681014   32993 node_ready.go:35] waiting up to 6m0s for node "test-preload-008752" to be "Ready" ...
	I1025 09:22:10.728311   32993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:22:10.908559   32993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:22:11.514542   32993 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 09:22:11.516301   32993 addons.go:514] duration metric: took 1.108256711s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1025 09:22:12.684914   32993 node_ready.go:57] node "test-preload-008752" has "Ready":"False" status (will retry)
	W1025 09:22:15.185429   32993 node_ready.go:57] node "test-preload-008752" has "Ready":"False" status (will retry)
	W1025 09:22:17.187028   32993 node_ready.go:57] node "test-preload-008752" has "Ready":"False" status (will retry)
	I1025 09:22:18.684980   32993 node_ready.go:49] node "test-preload-008752" is "Ready"
	I1025 09:22:18.685014   32993 node_ready.go:38] duration metric: took 8.003925363s for node "test-preload-008752" to be "Ready" ...
	I1025 09:22:18.685033   32993 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:22:18.685114   32993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:22:18.708395   32993 api_server.go:72] duration metric: took 8.300389222s to wait for apiserver process to appear ...
	I1025 09:22:18.708425   32993 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:22:18.708449   32993 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I1025 09:22:18.714154   32993 api_server.go:279] https://192.168.39.135:8443/healthz returned 200:
	ok
	I1025 09:22:18.715315   32993 api_server.go:141] control plane version: v1.32.0
	I1025 09:22:18.715337   32993 api_server.go:131] duration metric: took 6.905513ms to wait for apiserver health ...
	I1025 09:22:18.715345   32993 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:22:18.719358   32993 system_pods.go:59] 8 kube-system pods found
	I1025 09:22:18.719386   32993 system_pods.go:61] "coredns-668d6bf9bc-bxgdm" [c861e20f-5d5b-469c-92c4-d757407d2c99] Running
	I1025 09:22:18.719391   32993 system_pods.go:61] "coredns-668d6bf9bc-pdk94" [49e32c1f-1d63-41fb-99a5-dbd7728bb84d] Running
	I1025 09:22:18.719398   32993 system_pods.go:61] "etcd-test-preload-008752" [bbbb9773-6e6b-4b39-83e0-f35a712d0105] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:22:18.719410   32993 system_pods.go:61] "kube-apiserver-test-preload-008752" [9e56ab26-29aa-4c9f-85f8-618feb05d298] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:22:18.719417   32993 system_pods.go:61] "kube-controller-manager-test-preload-008752" [0ab112d0-9340-4a29-bdef-315f2dded7ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:22:18.719421   32993 system_pods.go:61] "kube-proxy-4qzgc" [6925f8ea-222d-4a19-85b3-ed3dc1453713] Running
	I1025 09:22:18.719424   32993 system_pods.go:61] "kube-scheduler-test-preload-008752" [e2df4829-aff3-4219-8f48-f66c87f755e3] Running
	I1025 09:22:18.719428   32993 system_pods.go:61] "storage-provisioner" [cad4d99c-2c55-4e37-8275-d3b54d9e48f9] Running
	I1025 09:22:18.719434   32993 system_pods.go:74] duration metric: took 4.083649ms to wait for pod list to return data ...
	I1025 09:22:18.719441   32993 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:22:18.722479   32993 default_sa.go:45] found service account: "default"
	I1025 09:22:18.722508   32993 default_sa.go:55] duration metric: took 3.0593ms for default service account to be created ...
	I1025 09:22:18.722517   32993 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:22:18.725385   32993 system_pods.go:86] 8 kube-system pods found
	I1025 09:22:18.725407   32993 system_pods.go:89] "coredns-668d6bf9bc-bxgdm" [c861e20f-5d5b-469c-92c4-d757407d2c99] Running
	I1025 09:22:18.725412   32993 system_pods.go:89] "coredns-668d6bf9bc-pdk94" [49e32c1f-1d63-41fb-99a5-dbd7728bb84d] Running
	I1025 09:22:18.725419   32993 system_pods.go:89] "etcd-test-preload-008752" [bbbb9773-6e6b-4b39-83e0-f35a712d0105] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:22:18.725425   32993 system_pods.go:89] "kube-apiserver-test-preload-008752" [9e56ab26-29aa-4c9f-85f8-618feb05d298] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:22:18.725433   32993 system_pods.go:89] "kube-controller-manager-test-preload-008752" [0ab112d0-9340-4a29-bdef-315f2dded7ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:22:18.725437   32993 system_pods.go:89] "kube-proxy-4qzgc" [6925f8ea-222d-4a19-85b3-ed3dc1453713] Running
	I1025 09:22:18.725442   32993 system_pods.go:89] "kube-scheduler-test-preload-008752" [e2df4829-aff3-4219-8f48-f66c87f755e3] Running
	I1025 09:22:18.725445   32993 system_pods.go:89] "storage-provisioner" [cad4d99c-2c55-4e37-8275-d3b54d9e48f9] Running
	I1025 09:22:18.725451   32993 system_pods.go:126] duration metric: took 2.929803ms to wait for k8s-apps to be running ...
	I1025 09:22:18.725457   32993 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:22:18.725497   32993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:22:18.743707   32993 system_svc.go:56] duration metric: took 18.238645ms WaitForService to wait for kubelet
	I1025 09:22:18.743743   32993 kubeadm.go:586] duration metric: took 8.33573957s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:22:18.743761   32993 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:22:18.746956   32993 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 09:22:18.746982   32993 node_conditions.go:123] node cpu capacity is 2
	I1025 09:22:18.746995   32993 node_conditions.go:105] duration metric: took 3.227941ms to run NodePressure ...
	I1025 09:22:18.747008   32993 start.go:241] waiting for startup goroutines ...
	I1025 09:22:18.747019   32993 start.go:246] waiting for cluster config update ...
	I1025 09:22:18.747032   32993 start.go:255] writing updated cluster config ...
	I1025 09:22:18.747321   32993 ssh_runner.go:195] Run: rm -f paused
	I1025 09:22:18.752677   32993 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:22:18.753127   32993 kapi.go:59] client config for test-preload-008752: &rest.Config{Host:"https://192.168.39.135:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/client.crt", KeyFile:"/home/jenkins/minikube-integration/21796-5973/.minikube/profiles/test-preload-008752/client.key", CAFile:"/home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:22:18.756528   32993 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-bxgdm" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:22:18.761113   32993 pod_ready.go:94] pod "coredns-668d6bf9bc-bxgdm" is "Ready"
	I1025 09:22:18.761138   32993 pod_ready.go:86] duration metric: took 4.584249ms for pod "coredns-668d6bf9bc-bxgdm" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:22:18.761147   32993 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-pdk94" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:22:18.764993   32993 pod_ready.go:94] pod "coredns-668d6bf9bc-pdk94" is "Ready"
	I1025 09:22:18.765016   32993 pod_ready.go:86] duration metric: took 3.862366ms for pod "coredns-668d6bf9bc-pdk94" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:22:18.767211   32993 pod_ready.go:83] waiting for pod "etcd-test-preload-008752" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:22:20.773543   32993 pod_ready.go:94] pod "etcd-test-preload-008752" is "Ready"
	I1025 09:22:20.773571   32993 pod_ready.go:86] duration metric: took 2.006339322s for pod "etcd-test-preload-008752" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:22:20.780725   32993 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-008752" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:22:20.787778   32993 pod_ready.go:94] pod "kube-apiserver-test-preload-008752" is "Ready"
	I1025 09:22:20.787814   32993 pod_ready.go:86] duration metric: took 7.064539ms for pod "kube-apiserver-test-preload-008752" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:22:20.790606   32993 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-008752" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:22:22.796820   32993 pod_ready.go:94] pod "kube-controller-manager-test-preload-008752" is "Ready"
	I1025 09:22:22.796848   32993 pod_ready.go:86] duration metric: took 2.0062108s for pod "kube-controller-manager-test-preload-008752" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:22:22.958128   32993 pod_ready.go:83] waiting for pod "kube-proxy-4qzgc" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:22:23.358075   32993 pod_ready.go:94] pod "kube-proxy-4qzgc" is "Ready"
	I1025 09:22:23.358107   32993 pod_ready.go:86] duration metric: took 399.923194ms for pod "kube-proxy-4qzgc" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:22:23.556914   32993 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-008752" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:22:23.956466   32993 pod_ready.go:94] pod "kube-scheduler-test-preload-008752" is "Ready"
	I1025 09:22:23.956495   32993 pod_ready.go:86] duration metric: took 399.555405ms for pod "kube-scheduler-test-preload-008752" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:22:23.956508   32993 pod_ready.go:40] duration metric: took 5.20379417s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:22:24.000024   32993 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1025 09:22:24.002231   32993 out.go:203] 
	W1025 09:22:24.004229   32993 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1025 09:22:24.005465   32993 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1025 09:22:24.006681   32993 out.go:179] * Done! kubectl is now configured to use "test-preload-008752" cluster and "default" namespace by default
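For readers tracing the start log above: the 500 responses from /healthz (showing "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" until that hook completes, then a 200 "ok") come from a simple poll loop against the apiserver. The following is a minimal illustrative Go sketch of such a loop, not minikube's actual api_server.go code; the URL, the 4-minute timeout, and the InsecureSkipVerify TLS shortcut are assumptions for the example (a real caller would load the cluster CA, as the kapi client config above does).

// Illustrative sketch only: poll an apiserver /healthz endpoint until it
// returns HTTP 200 or a deadline passes. A 500 with "healthz check failed"
// in the body is treated as "not ready yet" and retried.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for brevity: skip cert verification. Real callers
		// should configure the cluster CA instead.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			// Not healthy yet (e.g. a post-start hook still pending).
			fmt.Printf("status %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.135:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}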
	
	
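Likewise, the pod_ready.go lines above wait on each control-plane pod's PodReady condition before declaring the cluster usable. A minimal client-go sketch of that per-pod wait is shown below; it is not minikube's implementation, and the kubeconfig path and pod name (taken from the log) are assumptions for illustration.

// Illustrative sketch only: poll one kube-system pod until its PodReady
// condition is True, mirroring the "waiting for pod ... to be Ready" loop.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path for the example.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").
			Get(context.TODO(), "etcd-test-preload-008752", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod")
}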
	==> CRI-O <==
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.807251455Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761384144807183513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1e57d3f-18ad-4293-b454-5de1d1c43bc5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.807922330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a517a83-3db1-4d0d-9964-19e07b28cd73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.807975713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a517a83-3db1-4d0d-9964-19e07b28cd73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.808137195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:69a4566a9137db46f54a13a69a440bed79193af0b379f71705c903e338625a7b,PodSandboxId:91262003615ea7d2ddabaae6bf0cb25499c75788192bb635ab4436f67527efc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761384137029951193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pdk94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e32c1f-1d63-41fb-99a5-dbd7728bb84d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edb650ffc98e0ada04437c7db6e9c0da52db558bd24963a32bad5146805e796,PodSandboxId:cf94d10c83893722aa2eef7d2989e3a1224e8c63d82dd9755650fa2d60ea7b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761384136803577675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bxgdm,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: c861e20f-5d5b-469c-92c4-d757407d2c99,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a371dea1472d99c660ab910bb6f4e398c3743b1afd297403b0ccc5e497f80c9,PodSandboxId:719ed7376294a0b521ab3a13fab16e941ad5956601455b27a884ba3e0af114d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_RUNNING,CreatedAt:1761384129443793049,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad4d99c-2c55-4e37-8275-d3b54d9e48f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84460575bb771fd87b99beed8a51d04e0cb9bfc09a2f27e48fa51e5e5bcd4bf,PodSandboxId:d1aee6515c724e4b7e84b42a923dfcce8e7718e4168410db5022d1262113dc20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedA
t:1761384129362589930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qzgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6925f8ea-222d-4a19-85b3-ed3dc1453713,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afe8c04154af2754eb921fb62a9804a2a267842b0b4d6e3467ec3ac77b25b1a1,PodSandboxId:20e164acdacd7d9f152041f319e8029f282f3c61a5dab233a3859383d07df3e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761384125552037881,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f082cb5917f24b0afc615947ba7cd972,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd60e2d406f6962e4cef8b3ee50e1fe83f8255be4147aefe1a44994fa2780a1f,PodSandboxId:3357d817945ba8a544069d0a5677e141ec61b319928f6587e59191321bc65895,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761384125515628455,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d2ce24daf57de5dbcaa1d6b86df6fb,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f29ce14bee1dfd3a3f6b91e91419bc4d47c2dddd14a45f7cbbfb8b8a8fa63bb,PodSandboxId:f520f6258437adbf8d818d9465d988455235f8f1ff737c33d038df8e4ae7bbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761384125497040278,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cdf0468813dab3ccf2b31e78bc231d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713d208160f714c3867f66ecb2921dded1e4bd6fa13c460e75b07b158c85b086,PodSandboxId:5edd84e120bfdc4f6c188fe3cec9cf3f5ee0fbf07812d9e6522be698cc50ad3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761384125432671966,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec7623b040903b2d6be43d593a6f3e75,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a517a83-3db1-4d0d-9964-19e07b28cd73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.850286421Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4496b0bd-387a-4428-b566-b09b34955643 name=/runtime.v1.RuntimeService/Version
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.850394166Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4496b0bd-387a-4428-b566-b09b34955643 name=/runtime.v1.RuntimeService/Version
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.851970040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7a4b797-65fc-4bb5-8e05-185cc3a6302f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.852800072Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761384144852729073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7a4b797-65fc-4bb5-8e05-185cc3a6302f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.853605675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5984749c-3fd6-4233-9baa-ea2204d3aab8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.853788221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5984749c-3fd6-4233-9baa-ea2204d3aab8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.854114764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:69a4566a9137db46f54a13a69a440bed79193af0b379f71705c903e338625a7b,PodSandboxId:91262003615ea7d2ddabaae6bf0cb25499c75788192bb635ab4436f67527efc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761384137029951193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pdk94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e32c1f-1d63-41fb-99a5-dbd7728bb84d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edb650ffc98e0ada04437c7db6e9c0da52db558bd24963a32bad5146805e796,PodSandboxId:cf94d10c83893722aa2eef7d2989e3a1224e8c63d82dd9755650fa2d60ea7b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761384136803577675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bxgdm,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: c861e20f-5d5b-469c-92c4-d757407d2c99,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a371dea1472d99c660ab910bb6f4e398c3743b1afd297403b0ccc5e497f80c9,PodSandboxId:719ed7376294a0b521ab3a13fab16e941ad5956601455b27a884ba3e0af114d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_RUNNING,CreatedAt:1761384129443793049,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad4d99c-2c55-4e37-8275-d3b54d9e48f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84460575bb771fd87b99beed8a51d04e0cb9bfc09a2f27e48fa51e5e5bcd4bf,PodSandboxId:d1aee6515c724e4b7e84b42a923dfcce8e7718e4168410db5022d1262113dc20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedA
t:1761384129362589930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qzgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6925f8ea-222d-4a19-85b3-ed3dc1453713,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afe8c04154af2754eb921fb62a9804a2a267842b0b4d6e3467ec3ac77b25b1a1,PodSandboxId:20e164acdacd7d9f152041f319e8029f282f3c61a5dab233a3859383d07df3e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761384125552037881,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f082cb5917f24b0afc615947ba7cd972,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd60e2d406f6962e4cef8b3ee50e1fe83f8255be4147aefe1a44994fa2780a1f,PodSandboxId:3357d817945ba8a544069d0a5677e141ec61b319928f6587e59191321bc65895,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761384125515628455,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d2ce24daf57de5dbcaa1d6b86df6fb,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f29ce14bee1dfd3a3f6b91e91419bc4d47c2dddd14a45f7cbbfb8b8a8fa63bb,PodSandboxId:f520f6258437adbf8d818d9465d988455235f8f1ff737c33d038df8e4ae7bbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761384125497040278,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cdf0468813dab3ccf2b31e78bc231d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713d208160f714c3867f66ecb2921dded1e4bd6fa13c460e75b07b158c85b086,PodSandboxId:5edd84e120bfdc4f6c188fe3cec9cf3f5ee0fbf07812d9e6522be698cc50ad3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761384125432671966,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec7623b040903b2d6be43d593a6f3e75,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5984749c-3fd6-4233-9baa-ea2204d3aab8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.896099354Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d814196d-abaa-4f11-bd5f-e0d9414ea9f3 name=/runtime.v1.RuntimeService/Version
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.896177436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d814196d-abaa-4f11-bd5f-e0d9414ea9f3 name=/runtime.v1.RuntimeService/Version
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.897693833Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a660c87-7c4d-49a0-af31-019aa1547c83 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.898139412Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761384144898115874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a660c87-7c4d-49a0-af31-019aa1547c83 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.898723405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea1e3278-c5e9-4940-84cc-54a286dba860 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.898996276Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea1e3278-c5e9-4940-84cc-54a286dba860 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.899715492Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:69a4566a9137db46f54a13a69a440bed79193af0b379f71705c903e338625a7b,PodSandboxId:91262003615ea7d2ddabaae6bf0cb25499c75788192bb635ab4436f67527efc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761384137029951193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pdk94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e32c1f-1d63-41fb-99a5-dbd7728bb84d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edb650ffc98e0ada04437c7db6e9c0da52db558bd24963a32bad5146805e796,PodSandboxId:cf94d10c83893722aa2eef7d2989e3a1224e8c63d82dd9755650fa2d60ea7b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761384136803577675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bxgdm,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: c861e20f-5d5b-469c-92c4-d757407d2c99,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a371dea1472d99c660ab910bb6f4e398c3743b1afd297403b0ccc5e497f80c9,PodSandboxId:719ed7376294a0b521ab3a13fab16e941ad5956601455b27a884ba3e0af114d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_RUNNING,CreatedAt:1761384129443793049,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad4d99c-2c55-4e37-8275-d3b54d9e48f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84460575bb771fd87b99beed8a51d04e0cb9bfc09a2f27e48fa51e5e5bcd4bf,PodSandboxId:d1aee6515c724e4b7e84b42a923dfcce8e7718e4168410db5022d1262113dc20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedA
t:1761384129362589930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qzgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6925f8ea-222d-4a19-85b3-ed3dc1453713,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afe8c04154af2754eb921fb62a9804a2a267842b0b4d6e3467ec3ac77b25b1a1,PodSandboxId:20e164acdacd7d9f152041f319e8029f282f3c61a5dab233a3859383d07df3e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761384125552037881,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f082cb5917f24b0afc615947ba7cd972,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd60e2d406f6962e4cef8b3ee50e1fe83f8255be4147aefe1a44994fa2780a1f,PodSandboxId:3357d817945ba8a544069d0a5677e141ec61b319928f6587e59191321bc65895,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761384125515628455,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d2ce24daf57de5dbcaa1d6b86df6fb,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f29ce14bee1dfd3a3f6b91e91419bc4d47c2dddd14a45f7cbbfb8b8a8fa63bb,PodSandboxId:f520f6258437adbf8d818d9465d988455235f8f1ff737c33d038df8e4ae7bbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761384125497040278,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cdf0468813dab3ccf2b31e78bc231d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713d208160f714c3867f66ecb2921dded1e4bd6fa13c460e75b07b158c85b086,PodSandboxId:5edd84e120bfdc4f6c188fe3cec9cf3f5ee0fbf07812d9e6522be698cc50ad3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761384125432671966,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec7623b040903b2d6be43d593a6f3e75,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea1e3278-c5e9-4940-84cc-54a286dba860 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.935263781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf136e98-6d10-41f2-99b7-77c173305182 name=/runtime.v1.RuntimeService/Version
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.935351761Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf136e98-6d10-41f2-99b7-77c173305182 name=/runtime.v1.RuntimeService/Version
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.936797214Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09787f32-6160-4de0-839d-4b0401557c28 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.937285432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761384144937192448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09787f32-6160-4de0-839d-4b0401557c28 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.937883152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b81b0e9-a806-4188-949a-60691f3d6443 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.937954606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b81b0e9-a806-4188-949a-60691f3d6443 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 09:22:24 test-preload-008752 crio[858]: time="2025-10-25 09:22:24.938120561Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:69a4566a9137db46f54a13a69a440bed79193af0b379f71705c903e338625a7b,PodSandboxId:91262003615ea7d2ddabaae6bf0cb25499c75788192bb635ab4436f67527efc8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761384137029951193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pdk94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e32c1f-1d63-41fb-99a5-dbd7728bb84d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edb650ffc98e0ada04437c7db6e9c0da52db558bd24963a32bad5146805e796,PodSandboxId:cf94d10c83893722aa2eef7d2989e3a1224e8c63d82dd9755650fa2d60ea7b44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761384136803577675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bxgdm,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: c861e20f-5d5b-469c-92c4-d757407d2c99,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a371dea1472d99c660ab910bb6f4e398c3743b1afd297403b0ccc5e497f80c9,PodSandboxId:719ed7376294a0b521ab3a13fab16e941ad5956601455b27a884ba3e0af114d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_RUNNING,CreatedAt:1761384129443793049,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad4d99c-2c55-4e37-8275-d3b54d9e48f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84460575bb771fd87b99beed8a51d04e0cb9bfc09a2f27e48fa51e5e5bcd4bf,PodSandboxId:d1aee6515c724e4b7e84b42a923dfcce8e7718e4168410db5022d1262113dc20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedA
t:1761384129362589930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qzgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6925f8ea-222d-4a19-85b3-ed3dc1453713,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afe8c04154af2754eb921fb62a9804a2a267842b0b4d6e3467ec3ac77b25b1a1,PodSandboxId:20e164acdacd7d9f152041f319e8029f282f3c61a5dab233a3859383d07df3e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761384125552037881,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f082cb5917f24b0afc615947ba7cd972,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd60e2d406f6962e4cef8b3ee50e1fe83f8255be4147aefe1a44994fa2780a1f,PodSandboxId:3357d817945ba8a544069d0a5677e141ec61b319928f6587e59191321bc65895,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761384125515628455,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d2ce24daf57de5dbcaa1d6b86df6fb,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f29ce14bee1dfd3a3f6b91e91419bc4d47c2dddd14a45f7cbbfb8b8a8fa63bb,PodSandboxId:f520f6258437adbf8d818d9465d988455235f8f1ff737c33d038df8e4ae7bbf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761384125497040278,Labels:map[string]string{io.kubernetes.container.n
ame: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cdf0468813dab3ccf2b31e78bc231d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713d208160f714c3867f66ecb2921dded1e4bd6fa13c460e75b07b158c85b086,PodSandboxId:5edd84e120bfdc4f6c188fe3cec9cf3f5ee0fbf07812d9e6522be698cc50ad3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761384125432671966,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-008752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec7623b040903b2d6be43d593a6f3e75,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b81b0e9-a806-4188-949a-60691f3d6443 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	69a4566a9137d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   7 seconds ago       Running             coredns                   1                   91262003615ea       coredns-668d6bf9bc-pdk94
	7edb650ffc98e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   8 seconds ago       Running             coredns                   1                   cf94d10c83893       coredns-668d6bf9bc-bxgdm
	5a371dea1472d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   719ed7376294a       storage-provisioner
	c84460575bb77       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   15 seconds ago      Running             kube-proxy                1                   d1aee6515c724       kube-proxy-4qzgc
	afe8c04154af2       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   20e164acdacd7       kube-scheduler-test-preload-008752
	bd60e2d406f69       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   3357d817945ba       etcd-test-preload-008752
	5f29ce14bee1d       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   f520f6258437a       kube-controller-manager-test-preload-008752
	713d208160f71       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   5edd84e120bfd       kube-apiserver-test-preload-008752
	
	
	==> coredns [69a4566a9137db46f54a13a69a440bed79193af0b379f71705c903e338625a7b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57679 - 65288 "HINFO IN 533590793623908482.7735461839553742513. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.091259937s
	
	
	==> coredns [7edb650ffc98e0ada04437c7db6e9c0da52db558bd24963a32bad5146805e796] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48794 - 36795 "HINFO IN 3814954454910454418.1795934445522442614. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.09114227s
	
	
	==> describe nodes <==
	Name:               test-preload-008752
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-008752
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=test-preload-008752
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_21_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:21:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-008752
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:22:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:22:18 +0000   Sat, 25 Oct 2025 09:21:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:22:18 +0000   Sat, 25 Oct 2025 09:21:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:22:18 +0000   Sat, 25 Oct 2025 09:21:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:22:18 +0000   Sat, 25 Oct 2025 09:22:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.135
	  Hostname:    test-preload-008752
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dd23be26ae7423a856f1ccf3e6e7fd7
	  System UUID:                1dd23be2-6ae7-423a-856f-1ccf3e6e7fd7
	  Boot ID:                    41700035-b139-4e03-bc4f-6a65cffcb5a4
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-bxgdm                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     67s
	  kube-system                 coredns-668d6bf9bc-pdk94                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     67s
	  kube-system                 etcd-test-preload-008752                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         72s
	  kube-system                 kube-apiserver-test-preload-008752             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-test-preload-008752    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-4qzgc                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-test-preload-008752             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             240Mi (8%)  340Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 65s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   NodeHasSufficientMemory  71s                kubelet          Node test-preload-008752 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    71s                kubelet          Node test-preload-008752 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s                kubelet          Node test-preload-008752 status is now: NodeHasSufficientPID
	  Normal   Starting                 71s                kubelet          Starting kubelet.
	  Normal   NodeReady                70s                kubelet          Node test-preload-008752 status is now: NodeReady
	  Normal   RegisteredNode           68s                node-controller  Node test-preload-008752 event: Registered Node test-preload-008752 in Controller
	  Normal   Starting                 22s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  21s (x8 over 22s)  kubelet          Node test-preload-008752 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 22s)  kubelet          Node test-preload-008752 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 22s)  kubelet          Node test-preload-008752 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 17s                kubelet          Node test-preload-008752 has been rebooted, boot id: 41700035-b139-4e03-bc4f-6a65cffcb5a4
	  Normal   RegisteredNode           14s                node-controller  Node test-preload-008752 event: Registered Node test-preload-008752 in Controller
	
	
	==> dmesg <==
	[Oct25 09:21] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004807] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.995364] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct25 09:22] kauditd_printk_skb: 88 callbacks suppressed
	[  +5.575894] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.000019] kauditd_printk_skb: 128 callbacks suppressed
	[  +7.473759] kauditd_printk_skb: 122 callbacks suppressed
	
	
	==> etcd [bd60e2d406f6962e4cef8b3ee50e1fe83f8255be4147aefe1a44994fa2780a1f] <==
	{"level":"info","ts":"2025-10-25T09:22:05.948186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a24066339cb4fbfd switched to configuration voters=(11691457004512279549)"}
	{"level":"info","ts":"2025-10-25T09:22:05.948304Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7263b87883d60113","local-member-id":"a24066339cb4fbfd","added-peer-id":"a24066339cb4fbfd","added-peer-peer-urls":["https://192.168.39.135:2380"]}
	{"level":"info","ts":"2025-10-25T09:22:05.948413Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7263b87883d60113","local-member-id":"a24066339cb4fbfd","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:22:05.948451Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:22:05.951985Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T09:22:05.952365Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"a24066339cb4fbfd","initial-advertise-peer-urls":["https://192.168.39.135:2380"],"listen-peer-urls":["https://192.168.39.135:2380"],"advertise-client-urls":["https://192.168.39.135:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.135:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-25T09:22:05.952413Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T09:22:05.952539Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.135:2380"}
	{"level":"info","ts":"2025-10-25T09:22:05.952567Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.135:2380"}
	{"level":"info","ts":"2025-10-25T09:22:07.220718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a24066339cb4fbfd is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-25T09:22:07.220760Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a24066339cb4fbfd became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-25T09:22:07.220795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a24066339cb4fbfd received MsgPreVoteResp from a24066339cb4fbfd at term 2"}
	{"level":"info","ts":"2025-10-25T09:22:07.220815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a24066339cb4fbfd became candidate at term 3"}
	{"level":"info","ts":"2025-10-25T09:22:07.220831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a24066339cb4fbfd received MsgVoteResp from a24066339cb4fbfd at term 3"}
	{"level":"info","ts":"2025-10-25T09:22:07.220842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a24066339cb4fbfd became leader at term 3"}
	{"level":"info","ts":"2025-10-25T09:22:07.220850Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a24066339cb4fbfd elected leader a24066339cb4fbfd at term 3"}
	{"level":"info","ts":"2025-10-25T09:22:07.223892Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"a24066339cb4fbfd","local-member-attributes":"{Name:test-preload-008752 ClientURLs:[https://192.168.39.135:2379]}","request-path":"/0/members/a24066339cb4fbfd/attributes","cluster-id":"7263b87883d60113","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T09:22:07.223899Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:22:07.224129Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:22:07.224503Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T09:22:07.224548Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-25T09:22:07.225035Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-25T09:22:07.225711Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T09:22:07.225061Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-25T09:22:07.226890Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.135:2379"}
	
	
	==> kernel <==
	 09:22:25 up 0 min,  0 users,  load average: 0.72, 0.19, 0.06
	Linux test-preload-008752 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [713d208160f714c3867f66ecb2921dded1e4bd6fa13c460e75b07b158c85b086] <==
	I1025 09:22:08.409488       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:22:08.409836       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1025 09:22:08.409881       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:22:08.409900       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:22:08.409914       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:22:08.409934       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:22:08.412065       1 shared_informer.go:320] Caches are synced for configmaps
	I1025 09:22:08.412169       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:22:08.421304       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1025 09:22:08.449037       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1025 09:22:08.449646       1 policy_source.go:240] refreshing policies
	I1025 09:22:08.480820       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:22:08.480857       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:22:08.482421       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:22:08.488877       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:22:08.527394       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:22:08.970252       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1025 09:22:09.285635       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:22:10.191619       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1025 09:22:10.234613       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1025 09:22:10.269964       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:22:10.284191       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:22:11.714719       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1025 09:22:11.914309       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:22:11.967426       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5f29ce14bee1dfd3a3f6b91e91419bc4d47c2dddd14a45f7cbbfb8b8a8fa63bb] <==
	I1025 09:22:11.663718       1 shared_informer.go:320] Caches are synced for disruption
	I1025 09:22:11.663812       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1025 09:22:11.669080       1 shared_informer.go:320] Caches are synced for TTL
	I1025 09:22:11.678757       1 shared_informer.go:320] Caches are synced for node
	I1025 09:22:11.678916       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:22:11.678980       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:22:11.679043       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1025 09:22:11.679081       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1025 09:22:11.679294       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-008752"
	I1025 09:22:11.682990       1 shared_informer.go:320] Caches are synced for attach detach
	I1025 09:22:11.686626       1 shared_informer.go:320] Caches are synced for endpoint
	I1025 09:22:11.693178       1 shared_informer.go:320] Caches are synced for job
	I1025 09:22:11.694683       1 shared_informer.go:320] Caches are synced for deployment
	I1025 09:22:11.727808       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="78.189592ms"
	I1025 09:22:11.727912       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.377µs"
	I1025 09:22:17.111315       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="368.546µs"
	I1025 09:22:17.161882       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.246503ms"
	I1025 09:22:17.163925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="105.868µs"
	I1025 09:22:18.098765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="44.207µs"
	I1025 09:22:18.136424       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1025 09:22:18.137179       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="15.642373ms"
	I1025 09:22:18.137581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="141.397µs"
	I1025 09:22:18.569624       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-008752"
	I1025 09:22:18.583788       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-008752"
	I1025 09:22:21.623595       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c84460575bb771fd87b99beed8a51d04e0cb9bfc09a2f27e48fa51e5e5bcd4bf] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1025 09:22:09.711074       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1025 09:22:09.721042       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.135"]
	E1025 09:22:09.721124       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:22:09.760260       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1025 09:22:09.760359       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 09:22:09.760396       1 server_linux.go:170] "Using iptables Proxier"
	I1025 09:22:09.763361       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:22:09.763748       1 server.go:497] "Version info" version="v1.32.0"
	I1025 09:22:09.763774       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:22:09.765971       1 config.go:199] "Starting service config controller"
	I1025 09:22:09.766014       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1025 09:22:09.766041       1 config.go:105] "Starting endpoint slice config controller"
	I1025 09:22:09.766045       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1025 09:22:09.768016       1 config.go:329] "Starting node config controller"
	I1025 09:22:09.768073       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1025 09:22:09.866155       1 shared_informer.go:320] Caches are synced for service config
	I1025 09:22:09.866295       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1025 09:22:09.868368       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [afe8c04154af2754eb921fb62a9804a2a267842b0b4d6e3467ec3ac77b25b1a1] <==
	I1025 09:22:06.358737       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:22:08.353686       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:22:08.353728       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:22:08.353738       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:22:08.353748       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:22:08.447593       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1025 09:22:08.447739       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:22:08.457041       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1025 09:22:08.457283       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:22:08.457386       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 09:22:08.457477       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:22:08.557477       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 09:22:08 test-preload-008752 kubelet[1186]: E1025 09:22:08.966056    1186 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/49e32c1f-1d63-41fb-99a5-dbd7728bb84d-config-volume podName:49e32c1f-1d63-41fb-99a5-dbd7728bb84d nodeName:}" failed. No retries permitted until 2025-10-25 09:22:09.466037193 +0000 UTC m=+5.658232803 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/49e32c1f-1d63-41fb-99a5-dbd7728bb84d-config-volume") pod "coredns-668d6bf9bc-pdk94" (UID: "49e32c1f-1d63-41fb-99a5-dbd7728bb84d") : object "kube-system"/"coredns" not registered
	Oct 25 09:22:08 test-preload-008752 kubelet[1186]: E1025 09:22:08.966130    1186 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 09:22:08 test-preload-008752 kubelet[1186]: E1025 09:22:08.966172    1186 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c861e20f-5d5b-469c-92c4-d757407d2c99-config-volume podName:c861e20f-5d5b-469c-92c4-d757407d2c99 nodeName:}" failed. No retries permitted until 2025-10-25 09:22:09.466161001 +0000 UTC m=+5.658356611 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c861e20f-5d5b-469c-92c4-d757407d2c99-config-volume") pod "coredns-668d6bf9bc-bxgdm" (UID: "c861e20f-5d5b-469c-92c4-d757407d2c99") : object "kube-system"/"coredns" not registered
	Oct 25 09:22:08 test-preload-008752 kubelet[1186]: E1025 09:22:08.988487    1186 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Oct 25 09:22:09 test-preload-008752 kubelet[1186]: E1025 09:22:09.470798    1186 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 09:22:09 test-preload-008752 kubelet[1186]: E1025 09:22:09.470883    1186 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/49e32c1f-1d63-41fb-99a5-dbd7728bb84d-config-volume podName:49e32c1f-1d63-41fb-99a5-dbd7728bb84d nodeName:}" failed. No retries permitted until 2025-10-25 09:22:10.470868094 +0000 UTC m=+6.663063707 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/49e32c1f-1d63-41fb-99a5-dbd7728bb84d-config-volume") pod "coredns-668d6bf9bc-pdk94" (UID: "49e32c1f-1d63-41fb-99a5-dbd7728bb84d") : object "kube-system"/"coredns" not registered
	Oct 25 09:22:09 test-preload-008752 kubelet[1186]: E1025 09:22:09.470918    1186 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 09:22:09 test-preload-008752 kubelet[1186]: E1025 09:22:09.470939    1186 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c861e20f-5d5b-469c-92c4-d757407d2c99-config-volume podName:c861e20f-5d5b-469c-92c4-d757407d2c99 nodeName:}" failed. No retries permitted until 2025-10-25 09:22:10.470932263 +0000 UTC m=+6.663127872 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c861e20f-5d5b-469c-92c4-d757407d2c99-config-volume") pod "coredns-668d6bf9bc-bxgdm" (UID: "c861e20f-5d5b-469c-92c4-d757407d2c99") : object "kube-system"/"coredns" not registered
	Oct 25 09:22:09 test-preload-008752 kubelet[1186]: E1025 09:22:09.953056    1186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-bxgdm" podUID="c861e20f-5d5b-469c-92c4-d757407d2c99"
	Oct 25 09:22:10 test-preload-008752 kubelet[1186]: E1025 09:22:10.477546    1186 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 09:22:10 test-preload-008752 kubelet[1186]: E1025 09:22:10.477631    1186 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c861e20f-5d5b-469c-92c4-d757407d2c99-config-volume podName:c861e20f-5d5b-469c-92c4-d757407d2c99 nodeName:}" failed. No retries permitted until 2025-10-25 09:22:12.477617481 +0000 UTC m=+8.669813103 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c861e20f-5d5b-469c-92c4-d757407d2c99-config-volume") pod "coredns-668d6bf9bc-bxgdm" (UID: "c861e20f-5d5b-469c-92c4-d757407d2c99") : object "kube-system"/"coredns" not registered
	Oct 25 09:22:10 test-preload-008752 kubelet[1186]: E1025 09:22:10.477657    1186 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 09:22:10 test-preload-008752 kubelet[1186]: E1025 09:22:10.477739    1186 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/49e32c1f-1d63-41fb-99a5-dbd7728bb84d-config-volume podName:49e32c1f-1d63-41fb-99a5-dbd7728bb84d nodeName:}" failed. No retries permitted until 2025-10-25 09:22:12.477725933 +0000 UTC m=+8.669921543 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/49e32c1f-1d63-41fb-99a5-dbd7728bb84d-config-volume") pod "coredns-668d6bf9bc-pdk94" (UID: "49e32c1f-1d63-41fb-99a5-dbd7728bb84d") : object "kube-system"/"coredns" not registered
	Oct 25 09:22:10 test-preload-008752 kubelet[1186]: E1025 09:22:10.955956    1186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-pdk94" podUID="49e32c1f-1d63-41fb-99a5-dbd7728bb84d"
	Oct 25 09:22:11 test-preload-008752 kubelet[1186]: E1025 09:22:11.953493    1186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-bxgdm" podUID="c861e20f-5d5b-469c-92c4-d757407d2c99"
	Oct 25 09:22:12 test-preload-008752 kubelet[1186]: E1025 09:22:12.497163    1186 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 09:22:12 test-preload-008752 kubelet[1186]: E1025 09:22:12.497306    1186 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/49e32c1f-1d63-41fb-99a5-dbd7728bb84d-config-volume podName:49e32c1f-1d63-41fb-99a5-dbd7728bb84d nodeName:}" failed. No retries permitted until 2025-10-25 09:22:16.497291901 +0000 UTC m=+12.689487514 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/49e32c1f-1d63-41fb-99a5-dbd7728bb84d-config-volume") pod "coredns-668d6bf9bc-pdk94" (UID: "49e32c1f-1d63-41fb-99a5-dbd7728bb84d") : object "kube-system"/"coredns" not registered
	Oct 25 09:22:12 test-preload-008752 kubelet[1186]: E1025 09:22:12.497480    1186 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 09:22:12 test-preload-008752 kubelet[1186]: E1025 09:22:12.497534    1186 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c861e20f-5d5b-469c-92c4-d757407d2c99-config-volume podName:c861e20f-5d5b-469c-92c4-d757407d2c99 nodeName:}" failed. No retries permitted until 2025-10-25 09:22:16.497521409 +0000 UTC m=+12.689717019 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c861e20f-5d5b-469c-92c4-d757407d2c99-config-volume") pod "coredns-668d6bf9bc-bxgdm" (UID: "c861e20f-5d5b-469c-92c4-d757407d2c99") : object "kube-system"/"coredns" not registered
	Oct 25 09:22:12 test-preload-008752 kubelet[1186]: E1025 09:22:12.953797    1186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-pdk94" podUID="49e32c1f-1d63-41fb-99a5-dbd7728bb84d"
	Oct 25 09:22:13 test-preload-008752 kubelet[1186]: E1025 09:22:13.954931    1186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-bxgdm" podUID="c861e20f-5d5b-469c-92c4-d757407d2c99"
	Oct 25 09:22:13 test-preload-008752 kubelet[1186]: E1025 09:22:13.988252    1186 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761384133987634914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 09:22:13 test-preload-008752 kubelet[1186]: E1025 09:22:13.988292    1186 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761384133987634914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 09:22:23 test-preload-008752 kubelet[1186]: E1025 09:22:23.996831    1186 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761384143996533680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 09:22:23 test-preload-008752 kubelet[1186]: E1025 09:22:23.996861    1186 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761384143996533680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [5a371dea1472d99c660ab910bb6f4e398c3743b1afd297403b0ccc5e497f80c9] <==
	I1025 09:22:09.655372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-008752 -n test-preload-008752
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-008752 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-008752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-008752
--- FAIL: TestPreload (122.29s)
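For reference, the post-mortem dump above (everything up to `-- /stdout --`) is cluster log output collected by the test harness before the profile was deleted. A minimal sketch of such a collection helper follows; the function name `dumpLogs` is hypothetical, not the actual helpers_test.go code, and it assumes the same binary path used throughout this report:

	package helpers

	import (
		"os/exec"
		"strconv"
		"testing"
	)

	// dumpLogs captures the last n lines of cluster logs for a minikube
	// profile and writes them to the test log, mirroring the post-mortem
	// step visible in the transcript above.
	func dumpLogs(t *testing.T, profile string, n int) {
		t.Helper()
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", profile, "logs", "-n", strconv.Itoa(n)).CombinedOutput()
		if err != nil {
			t.Logf("failed to collect logs for %s: %v", profile, err)
		}
		t.Logf("%s logs:\n%s", profile, out)
	}

On a failure this would be invoked as, for example, `dumpLogs(t, "test-preload-008752", 25)` before the profile cleanup step.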

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (48.77s)
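This test runs `minikube start` a second time against an already-running cluster and asserts that the output contains the marker line "The running cluster does not require reconfiguration" (the check that fails at pause_test.go:100 below). A minimal sketch of that assertion, assuming the flags shown in the transcript; the test body here is illustrative, not the actual pause_test.go code:

	package pause

	import (
		"os/exec"
		"strings"
		"testing"
	)

	func TestSecondStartNoReconfiguration(t *testing.T) {
		// Second start against the existing profile; minikube should detect
		// the running cluster and skip reconfiguration.
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "pause-220312",
			"--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		if err != nil {
			t.Fatalf("second start failed: %v\n%s", err, out)
		}
		const marker = "The running cluster does not require reconfiguration"
		if !strings.Contains(string(out), marker) {
			t.Errorf("expected second start output to include %q, got:\n%s", marker, out)
		}
	}

In the transcript that follows, the second start succeeds but its stdout lacks the marker line, which is exactly the condition this assertion rejects.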

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-220312 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-220312 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.962243897s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-220312] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-220312" primary control-plane node in "pause-220312" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-220312" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:29:46.339773   40412 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:29:46.340078   40412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:29:46.340092   40412 out.go:374] Setting ErrFile to fd 2...
	I1025 09:29:46.340098   40412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:29:46.340434   40412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
	I1025 09:29:46.341069   40412 out.go:368] Setting JSON to false
	I1025 09:29:46.342378   40412 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4336,"bootTime":1761380250,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:29:46.342502   40412 start.go:141] virtualization: kvm guest
	I1025 09:29:46.344728   40412 out.go:179] * [pause-220312] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:29:46.346376   40412 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:29:46.346558   40412 notify.go:220] Checking for updates...
	I1025 09:29:46.348833   40412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:29:46.350475   40412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	I1025 09:29:46.351861   40412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 09:29:46.353140   40412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:29:46.354340   40412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:29:46.356390   40412 config.go:182] Loaded profile config "pause-220312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:29:46.357004   40412 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:29:46.403376   40412 out.go:179] * Using the kvm2 driver based on existing profile
	I1025 09:29:46.404831   40412 start.go:305] selected driver: kvm2
	I1025 09:29:46.404857   40412 start.go:925] validating driver "kvm2" against &{Name:pause-220312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-220312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:29:46.405019   40412 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:29:46.405979   40412 cni.go:84] Creating CNI manager for ""
	I1025 09:29:46.406042   40412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 09:29:46.406113   40412 start.go:349] cluster config:
	{Name:pause-220312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-220312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:29:46.406270   40412 iso.go:125] acquiring lock: {Name:mk56ae07ef3e2fe29ebca77d84768cf173c5b3d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:29:46.408774   40412 out.go:179] * Starting "pause-220312" primary control-plane node in "pause-220312" cluster
	I1025 09:29:46.410319   40412 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:29:46.410362   40412 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:29:46.410370   40412 cache.go:58] Caching tarball of preloaded images
	I1025 09:29:46.410451   40412 preload.go:233] Found /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:29:46.410467   40412 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:29:46.410631   40412 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/config.json ...
	I1025 09:29:46.410870   40412 start.go:360] acquireMachinesLock for pause-220312: {Name:mk307ae3583c207a47794987d4930662cf65d417 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 09:29:46.410924   40412 start.go:364] duration metric: took 34.298µs to acquireMachinesLock for "pause-220312"
	I1025 09:29:46.410938   40412 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:29:46.410946   40412 fix.go:54] fixHost starting: 
	I1025 09:29:46.413212   40412 fix.go:112] recreateIfNeeded on pause-220312: state=Running err=<nil>
	W1025 09:29:46.413266   40412 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 09:29:46.414894   40412 out.go:252] * Updating the running kvm2 "pause-220312" VM ...
	I1025 09:29:46.414925   40412 machine.go:93] provisionDockerMachine start ...
	I1025 09:29:46.418040   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:46.418616   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:46.418656   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:46.418807   40412 main.go:141] libmachine: Using SSH client type: native
	I1025 09:29:46.419120   40412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1025 09:29:46.419136   40412 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:29:46.547648   40412 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-220312
	
	I1025 09:29:46.547687   40412 buildroot.go:166] provisioning hostname "pause-220312"
	I1025 09:29:46.552030   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:46.552645   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:46.552680   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:46.552895   40412 main.go:141] libmachine: Using SSH client type: native
	I1025 09:29:46.553246   40412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1025 09:29:46.553267   40412 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-220312 && echo "pause-220312" | sudo tee /etc/hostname
	I1025 09:29:46.709571   40412 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-220312
	
	I1025 09:29:46.713001   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:46.713702   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:46.713754   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:46.714162   40412 main.go:141] libmachine: Using SSH client type: native
	I1025 09:29:46.714440   40412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1025 09:29:46.714463   40412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-220312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-220312/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-220312' | sudo tee -a /etc/hosts; 
				fi
			fi
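
Note: the shell fragment above is idempotent. The grep -xq first checks whether /etc/hosts already has a line ending in the hostname; an existing 127.0.1.1 entry is rewritten in place, and otherwise one is appended. Either way the guest ends up with a single mapping:

    127.0.1.1 pause-220312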
	I1025 09:29:46.838316   40412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:29:46.838362   40412 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21796-5973/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-5973/.minikube}
	I1025 09:29:46.838392   40412 buildroot.go:174] setting up certificates
	I1025 09:29:46.838410   40412 provision.go:84] configureAuth start
	I1025 09:29:46.842067   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:46.842625   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:46.842681   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:46.846283   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:46.846733   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:46.846764   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:46.846898   40412 provision.go:143] copyHostCerts
	I1025 09:29:46.846951   40412 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5973/.minikube/ca.pem, removing ...
	I1025 09:29:46.846964   40412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5973/.minikube/ca.pem
	I1025 09:29:46.847061   40412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-5973/.minikube/ca.pem (1078 bytes)
	I1025 09:29:46.847170   40412 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5973/.minikube/cert.pem, removing ...
	I1025 09:29:46.847180   40412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5973/.minikube/cert.pem
	I1025 09:29:46.847218   40412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-5973/.minikube/cert.pem (1123 bytes)
	I1025 09:29:46.847322   40412 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-5973/.minikube/key.pem, removing ...
	I1025 09:29:46.847335   40412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-5973/.minikube/key.pem
	I1025 09:29:46.847374   40412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-5973/.minikube/key.pem (1679 bytes)
	I1025 09:29:46.847467   40412 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca-key.pem org=jenkins.pause-220312 san=[127.0.0.1 192.168.61.192 localhost minikube pause-220312]
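
The server certificate is regenerated with the SANs listed above (loopback, the VM IP, and the usual host names). An illustrative way to double-check them on the guest after the copy below, using any recent OpenSSL:

    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'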
	I1025 09:29:47.103575   40412 provision.go:177] copyRemoteCerts
	I1025 09:29:47.103642   40412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:29:47.106825   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:47.107409   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:47.107440   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:47.107646   40412 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/pause-220312/id_rsa Username:docker}
	I1025 09:29:47.202380   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:29:47.244261   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1025 09:29:47.278036   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:29:47.328872   40412 provision.go:87] duration metric: took 490.440854ms to configureAuth
	I1025 09:29:47.328910   40412 buildroot.go:189] setting minikube options for container-runtime
	I1025 09:29:47.329257   40412 config.go:182] Loaded profile config "pause-220312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:29:47.333271   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:47.333870   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:47.333911   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:47.334263   40412 main.go:141] libmachine: Using SSH client type: native
	I1025 09:29:47.334578   40412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1025 09:29:47.334602   40412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:29:52.945358   40412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:29:52.945390   40412 machine.go:96] duration metric: took 6.530456421s to provisionDockerMachine
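
Most of that 6.5s is the crio restart triggered above. The options land in /etc/sysconfig/crio.minikube, which the crio unit on the guest image presumably consumes as an environment file; a hypothetical drop-in illustrating the pattern (the path and unit contents here are assumptions, not taken from the image):

    # /etc/systemd/system/crio.service.d/10-minikube.conf (hypothetical)
    [Service]
    EnvironmentFile=-/etc/sysconfig/crio.minikube
    ExecStart=
    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS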
	I1025 09:29:52.945403   40412 start.go:293] postStartSetup for "pause-220312" (driver="kvm2")
	I1025 09:29:52.945415   40412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:29:52.945492   40412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:29:52.950206   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:52.950873   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:52.950921   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:52.951120   40412 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/pause-220312/id_rsa Username:docker}
	I1025 09:29:53.048390   40412 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:29:53.055264   40412 info.go:137] Remote host: Buildroot 2025.02
	I1025 09:29:53.055303   40412 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5973/.minikube/addons for local assets ...
	I1025 09:29:53.055377   40412 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5973/.minikube/files for local assets ...
	I1025 09:29:53.055493   40412 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem -> 98812.pem in /etc/ssl/certs
	I1025 09:29:53.055681   40412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:29:53.073197   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem --> /etc/ssl/certs/98812.pem (1708 bytes)
	I1025 09:29:53.106798   40412 start.go:296] duration metric: took 161.355117ms for postStartSetup
	I1025 09:29:53.106856   40412 fix.go:56] duration metric: took 6.695908107s for fixHost
	I1025 09:29:53.110512   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.111047   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:53.111097   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.111390   40412 main.go:141] libmachine: Using SSH client type: native
	I1025 09:29:53.111692   40412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1025 09:29:53.111708   40412 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 09:29:53.231369   40412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761384593.227712471
	
	I1025 09:29:53.231394   40412 fix.go:216] guest clock: 1761384593.227712471
	I1025 09:29:53.231404   40412 fix.go:229] Guest: 2025-10-25 09:29:53.227712471 +0000 UTC Remote: 2025-10-25 09:29:53.106861508 +0000 UTC m=+6.843347300 (delta=120.850963ms)
	I1025 09:29:53.231456   40412 fix.go:200] guest clock delta is within tolerance: 120.850963ms
	I1025 09:29:53.231466   40412 start.go:83] releasing machines lock for "pause-220312", held for 6.820531996s
	I1025 09:29:53.234859   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.235313   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:53.235346   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.236093   40412 ssh_runner.go:195] Run: cat /version.json
	I1025 09:29:53.236134   40412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:29:53.239460   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.239903   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:53.239933   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.239959   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.240136   40412 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/pause-220312/id_rsa Username:docker}
	I1025 09:29:53.240618   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:53.240651   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.240841   40412 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/pause-220312/id_rsa Username:docker}
	I1025 09:29:53.360875   40412 ssh_runner.go:195] Run: systemctl --version
	I1025 09:29:53.368199   40412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:29:53.524671   40412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:29:53.536824   40412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:29:53.536900   40412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:29:53.550936   40412 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
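
ssh_runner prints the argv with shell quoting stripped; re-adding the quoting, the disable command above is equivalent to:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;

Here it finds no matching bridge/podman configs, so nothing is renamed.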
	I1025 09:29:53.550968   40412 start.go:495] detecting cgroup driver to use...
	I1025 09:29:53.551049   40412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:29:53.575474   40412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:29:53.595751   40412 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:29:53.595848   40412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:29:53.616773   40412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:29:53.634871   40412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:29:53.835822   40412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:29:54.042928   40412 docker.go:234] disabling docker service ...
	I1025 09:29:54.043012   40412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:29:54.074937   40412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:29:54.092816   40412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:29:54.300787   40412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:29:54.498366   40412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:29:54.519343   40412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:29:54.615128   40412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:29:54.615217   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.647393   40412 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:29:54.647460   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.674617   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.713988   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.741287   40412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:29:54.759322   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.773742   40412 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.793448   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
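
Taken together, the sed edits above leave the touched keys of /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (illustrative fragment assuming the stock sectioning; unrelated keys omitted):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]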
	I1025 09:29:54.818581   40412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:29:54.836610   40412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:29:54.870041   40412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:29:55.236756   40412 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:29:56.014495   40412 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:29:56.014573   40412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:29:56.020977   40412 start.go:563] Will wait 60s for crictl version
	I1025 09:29:56.021061   40412 ssh_runner.go:195] Run: which crictl
	I1025 09:29:56.025408   40412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 09:29:56.064944   40412 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 09:29:56.065041   40412 ssh_runner.go:195] Run: crio --version
	I1025 09:29:56.096272   40412 ssh_runner.go:195] Run: crio --version
	I1025 09:29:56.129041   40412 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1025 09:29:56.133423   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:56.134041   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:56.134069   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:56.134339   40412 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1025 09:29:56.139340   40412 kubeadm.go:883] updating cluster {Name:pause-220312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-220312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:29:56.139528   40412 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:29:56.139608   40412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:29:56.199404   40412 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:29:56.199435   40412 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:29:56.199484   40412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:29:56.238111   40412 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:29:56.238136   40412 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:29:56.238143   40412 kubeadm.go:934] updating node { 192.168.61.192 8443 v1.34.1 crio true true} ...
	I1025 09:29:56.238253   40412 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-220312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-220312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
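
The pair of ExecStart= lines in the generated kubelet unit is the standard systemd override idiom: an empty ExecStart= clears any command inherited from the base unit, so only the full kubelet invocation that follows takes effect.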
	I1025 09:29:56.238352   40412 ssh_runner.go:195] Run: crio config
	I1025 09:29:56.293607   40412 cni.go:84] Creating CNI manager for ""
	I1025 09:29:56.293633   40412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 09:29:56.293651   40412 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:29:56.293672   40412 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.192 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-220312 NodeName:pause-220312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.192"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.192 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:29:56.293785   40412 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.192
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-220312"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.192"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.192"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:29:56.293848   40412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:29:56.306746   40412 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:29:56.306841   40412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:29:56.319853   40412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1025 09:29:56.342963   40412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:29:56.366632   40412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
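
The rendered config is staged as kubeadm.yaml.new and applied in a later step. For reference, kubeadm consumes such a file via its --config flag; purely as an illustration (on a restart like this one, minikube may drive individual init phases rather than a full init):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml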
	I1025 09:29:56.393174   40412 ssh_runner.go:195] Run: grep 192.168.61.192	control-plane.minikube.internal$ /etc/hosts
	I1025 09:29:56.398476   40412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:29:56.588943   40412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:29:56.609304   40412 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312 for IP: 192.168.61.192
	I1025 09:29:56.609336   40412 certs.go:195] generating shared ca certs ...
	I1025 09:29:56.609358   40412 certs.go:227] acquiring lock for ca certs: {Name:mke8d6ba2f98d813f76972dbfee9daa2e84822df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:29:56.609544   40412 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key
	I1025 09:29:56.609596   40412 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key
	I1025 09:29:56.609606   40412 certs.go:257] generating profile certs ...
	I1025 09:29:56.609696   40412 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/client.key
	I1025 09:29:56.609761   40412 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/apiserver.key.67d2603a
	I1025 09:29:56.609804   40412 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/proxy-client.key
	I1025 09:29:56.609940   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/9881.pem (1338 bytes)
	W1025 09:29:56.609974   40412 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5973/.minikube/certs/9881_empty.pem, impossibly tiny 0 bytes
	I1025 09:29:56.609986   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:29:56.610022   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:29:56.610052   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:29:56.610077   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/key.pem (1679 bytes)
	I1025 09:29:56.610121   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem (1708 bytes)
	I1025 09:29:56.610724   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:29:56.644137   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 09:29:56.689075   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:29:56.781784   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1025 09:29:56.851328   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:29:56.972258   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:29:57.046703   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:29:57.133956   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:29:57.203953   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/certs/9881.pem --> /usr/share/ca-certificates/9881.pem (1338 bytes)
	I1025 09:29:57.271624   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem --> /usr/share/ca-certificates/98812.pem (1708 bytes)
	I1025 09:29:57.332289   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:29:57.389221   40412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:29:57.423056   40412 ssh_runner.go:195] Run: openssl version
	I1025 09:29:57.432964   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9881.pem && ln -fs /usr/share/ca-certificates/9881.pem /etc/ssl/certs/9881.pem"
	I1025 09:29:57.448713   40412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9881.pem
	I1025 09:29:57.460379   40412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/9881.pem
	I1025 09:29:57.460449   40412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9881.pem
	I1025 09:29:57.478498   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9881.pem /etc/ssl/certs/51391683.0"
	I1025 09:29:57.496222   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98812.pem && ln -fs /usr/share/ca-certificates/98812.pem /etc/ssl/certs/98812.pem"
	I1025 09:29:57.515389   40412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98812.pem
	I1025 09:29:57.521863   40412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/98812.pem
	I1025 09:29:57.521945   40412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98812.pem
	I1025 09:29:57.532684   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98812.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:29:57.551523   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:29:57.573217   40412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:29:57.589306   40412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:29:57.589382   40412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:29:57.604861   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
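
The <hash>.0 symlink names above come from OpenSSL's subject-hash lookup convention for certificate directories: consumers locate a CA in /etc/ssl/certs by the hash of its subject, so each PEM gets a link named after the output of openssl x509 -hash. Reproducing the minikubeCA link from the log by hand:

    hash=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)  # b5213941, per the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"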
	I1025 09:29:57.626796   40412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:29:57.633461   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:29:57.644805   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:29:57.653436   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:29:57.661035   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:29:57.668978   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:29:57.678104   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
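
Each -checkend 86400 probe above exits 0 only if the certificate will still be valid 24 hours from now, which is presumably how minikube decides a cert can be reused without regeneration. Standalone:

    # exit 0: valid for at least another 24h; exit 1: expires (or has expired) within 24h
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400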
	I1025 09:29:57.686136   40412 kubeadm.go:400] StartCluster: {Name:pause-220312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-220312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:29:57.686266   40412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:29:57.686344   40412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:29:57.747038   40412 cri.go:89] found id: "07859464f9166fa1d7359a66f8ed9dc5a1487e3cddc919b6855ae3aad361bbf9"
	I1025 09:29:57.747074   40412 cri.go:89] found id: "720238c583854abcd754418c8719916550d6451a5fccbd31e2487ba1cb6319e5"
	I1025 09:29:57.747080   40412 cri.go:89] found id: "45c34007f8cf7266a1b9d58dcbba1b9b782b0649d2a43b84465d34f37bde6b9e"
	I1025 09:29:57.747085   40412 cri.go:89] found id: "c03ec903227da1dce0853648db885c6182b1104bf9ca1362bc3d2a58fdbe0ac0"
	I1025 09:29:57.747089   40412 cri.go:89] found id: "a2578bb7857d5b98c139a5fea6bb84ea8a16422469161555c5ef98aa376dc265"
	I1025 09:29:57.747093   40412 cri.go:89] found id: "89f34eaf4ef2646f3d7486eefea7402bd59f91ef73117f34c41f6b37a07e0749"
	I1025 09:29:57.747097   40412 cri.go:89] found id: "7ea2b3eef94de4b65c75d98f542afa63104d498b22d389cca485c22d95e19a8e"
	I1025 09:29:57.747102   40412 cri.go:89] found id: ""
	I1025 09:29:57.747155   40412 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-220312 -n pause-220312
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-220312 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-220312 logs -n 25: (1.8360894s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p NoKubernetes-024391 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-024391       │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-026829 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ running-upgrade-026829    │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │                     │
	│ stop    │ -p NoKubernetes-024391                                                                                                                                                                                                  │ NoKubernetes-024391       │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ delete  │ -p running-upgrade-026829                                                                                                                                                                                               │ running-upgrade-026829    │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ start   │ -p NoKubernetes-024391 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-024391       │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ start   │ -p cert-expiration-097778 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-097778    │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:28 UTC │
	│ delete  │ -p kubernetes-upgrade-254344                                                                                                                                                                                            │ kubernetes-upgrade-254344 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ start   │ -p cert-options-585228 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-585228       │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:28 UTC │
	│ ssh     │ -p NoKubernetes-024391 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-024391       │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │                     │
	│ delete  │ -p NoKubernetes-024391                                                                                                                                                                                                  │ NoKubernetes-024391       │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ start   │ -p pause-220312 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-220312              │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ force-systemd-flag-811701 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-811701 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ delete  │ -p force-systemd-flag-811701                                                                                                                                                                                            │ force-systemd-flag-811701 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ start   │ -p stopped-upgrade-196082 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-196082    │ jenkins │ v1.32.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ cert-options-585228 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-585228       │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ ssh     │ -p cert-options-585228 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-585228       │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ delete  │ -p cert-options-585228                                                                                                                                                                                                  │ cert-options-585228       │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ start   │ -p auto-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-816358               │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:30 UTC │
	│ stop    │ stopped-upgrade-196082 stop                                                                                                                                                                                             │ stopped-upgrade-196082    │ jenkins │ v1.32.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p stopped-upgrade-196082 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ stopped-upgrade-196082    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p pause-220312 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-220312              │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:30 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-196082 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-196082    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │                     │
	│ delete  │ -p stopped-upgrade-196082                                                                                                                                                                                               │ stopped-upgrade-196082    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p kindnet-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                                                                                  │ kindnet-816358            │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p auto-816358 pgrep -a kubelet                                                                                                                                                                                         │ auto-816358               │ jenkins │ v1.37.0 │ 25 Oct 25 09:30 UTC │ 25 Oct 25 09:30 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:29:55
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:29:55.534982   40549 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:29:55.535309   40549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:29:55.535319   40549 out.go:374] Setting ErrFile to fd 2...
	I1025 09:29:55.535323   40549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:29:55.535559   40549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
	I1025 09:29:55.536087   40549 out.go:368] Setting JSON to false
	I1025 09:29:55.537025   40549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4346,"bootTime":1761380250,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:29:55.537112   40549 start.go:141] virtualization: kvm guest
	I1025 09:29:55.539461   40549 out.go:179] * [kindnet-816358] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:29:55.541232   40549 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:29:55.541220   40549 notify.go:220] Checking for updates...
	I1025 09:29:55.544635   40549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:29:55.546398   40549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	I1025 09:29:55.547898   40549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 09:29:55.549433   40549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:29:55.550822   40549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:29:55.553189   40549 config.go:182] Loaded profile config "auto-816358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:29:55.553391   40549 config.go:182] Loaded profile config "cert-expiration-097778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:29:55.553602   40549 config.go:182] Loaded profile config "pause-220312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:29:55.553765   40549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:29:55.597207   40549 out.go:179] * Using the kvm2 driver based on user configuration
	I1025 09:29:55.598604   40549 start.go:305] selected driver: kvm2
	I1025 09:29:55.598623   40549 start.go:925] validating driver "kvm2" against <nil>
	I1025 09:29:55.598635   40549 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:29:55.599499   40549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:29:55.599742   40549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:29:55.599769   40549 cni.go:84] Creating CNI manager for "kindnet"
	I1025 09:29:55.599775   40549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:29:55.599822   40549 start.go:349] cluster config:
	{Name:kindnet-816358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-816358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:29:55.599947   40549 iso.go:125] acquiring lock: {Name:mk56ae07ef3e2fe29ebca77d84768cf173c5b3d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:29:55.601747   40549 out.go:179] * Starting "kindnet-816358" primary control-plane node in "kindnet-816358" cluster
	I1025 09:29:55.603008   40549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:29:55.603060   40549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:29:55.603073   40549 cache.go:58] Caching tarball of preloaded images
	I1025 09:29:55.603174   40549 preload.go:233] Found /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:29:55.603189   40549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:29:55.603353   40549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/config.json ...
	I1025 09:29:55.603382   40549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/config.json: {Name:mkab1a2560e7c80237a9e5eb471fb560e51305a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:29:55.603580   40549 start.go:360] acquireMachinesLock for kindnet-816358: {Name:mk307ae3583c207a47794987d4930662cf65d417 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 09:29:55.603619   40549 start.go:364] duration metric: took 20.896µs to acquireMachinesLock for "kindnet-816358"
	I1025 09:29:55.603640   40549 start.go:93] Provisioning new machine with config: &{Name:kindnet-816358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.34.1 ClusterName:kindnet-816358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:29:55.603722   40549 start.go:125] createHost starting for "" (driver="kvm2")
	I1025 09:29:52.945358   40412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:29:52.945390   40412 machine.go:96] duration metric: took 6.530456421s to provisionDockerMachine
	I1025 09:29:52.945403   40412 start.go:293] postStartSetup for "pause-220312" (driver="kvm2")
	I1025 09:29:52.945415   40412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:29:52.945492   40412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:29:52.950206   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:52.950873   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:52.950921   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:52.951120   40412 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/pause-220312/id_rsa Username:docker}
	I1025 09:29:53.048390   40412 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:29:53.055264   40412 info.go:137] Remote host: Buildroot 2025.02
	I1025 09:29:53.055303   40412 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5973/.minikube/addons for local assets ...
	I1025 09:29:53.055377   40412 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5973/.minikube/files for local assets ...
	I1025 09:29:53.055493   40412 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem -> 98812.pem in /etc/ssl/certs
	I1025 09:29:53.055681   40412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:29:53.073197   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem --> /etc/ssl/certs/98812.pem (1708 bytes)
	I1025 09:29:53.106798   40412 start.go:296] duration metric: took 161.355117ms for postStartSetup
	I1025 09:29:53.106856   40412 fix.go:56] duration metric: took 6.695908107s for fixHost
	I1025 09:29:53.110512   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.111047   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:53.111097   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.111390   40412 main.go:141] libmachine: Using SSH client type: native
	I1025 09:29:53.111692   40412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1025 09:29:53.111708   40412 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 09:29:53.231369   40412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761384593.227712471
	
	I1025 09:29:53.231394   40412 fix.go:216] guest clock: 1761384593.227712471
	I1025 09:29:53.231404   40412 fix.go:229] Guest: 2025-10-25 09:29:53.227712471 +0000 UTC Remote: 2025-10-25 09:29:53.106861508 +0000 UTC m=+6.843347300 (delta=120.850963ms)
	I1025 09:29:53.231456   40412 fix.go:200] guest clock delta is within tolerance: 120.850963ms
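
The delta above is plain subtraction of the two wall clocks: 1761384593.227712471 s (guest) - 1761384593.106861508 s (remote host) = 0.120850963 s, i.e. the 120.850963ms the log reports, well inside minikube's drift tolerance, so the guest clock is left alone.
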
	I1025 09:29:53.231466   40412 start.go:83] releasing machines lock for "pause-220312", held for 6.820531996s
	I1025 09:29:53.234859   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.235313   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:53.235346   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.236093   40412 ssh_runner.go:195] Run: cat /version.json
	I1025 09:29:53.236134   40412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:29:53.239460   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.239903   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:53.239933   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.239959   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.240136   40412 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/pause-220312/id_rsa Username:docker}
	I1025 09:29:53.240618   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:53.240651   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.240841   40412 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/pause-220312/id_rsa Username:docker}
	I1025 09:29:53.360875   40412 ssh_runner.go:195] Run: systemctl --version
	I1025 09:29:53.368199   40412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:29:53.524671   40412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:29:53.536824   40412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:29:53.536900   40412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:29:53.550936   40412 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
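
Note that ssh_runner logs arguments with their shell quoting stripped; restored, the find invocation above is equivalent to roughly the following (a re-quoted sketch, not the exact bytes sent over SSH):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
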
	I1025 09:29:53.550968   40412 start.go:495] detecting cgroup driver to use...
	I1025 09:29:53.551049   40412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:29:53.575474   40412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:29:53.595751   40412 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:29:53.595848   40412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:29:53.616773   40412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:29:53.634871   40412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:29:53.835822   40412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:29:54.042928   40412 docker.go:234] disabling docker service ...
	I1025 09:29:54.043012   40412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:29:54.074937   40412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:29:54.092816   40412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:29:54.300787   40412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:29:54.498366   40412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:29:54.519343   40412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:29:54.615128   40412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:29:54.615217   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.647393   40412 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:29:54.647460   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.674617   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.713988   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.741287   40412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:29:54.759322   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.773742   40412 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.793448   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
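
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a reconstructed sketch, not a capture from the VM; the [crio.image]/[crio.runtime] section placement is assumed from the stock CRI-O config layout):

    [crio.image]
    # pause image injected by the first sed
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    # cgroup driver rewritten, conmon cgroup re-added after it
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    # sysctl list ensured, then the unprivileged-port entry prepended
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
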
	I1025 09:29:54.818581   40412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:29:54.836610   40412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:29:54.870041   40412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:29:55.236756   40412 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:29:56.014495   40412 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:29:56.014573   40412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:29:56.020977   40412 start.go:563] Will wait 60s for crictl version
	I1025 09:29:56.021061   40412 ssh_runner.go:195] Run: which crictl
	I1025 09:29:56.025408   40412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 09:29:56.064944   40412 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 09:29:56.065041   40412 ssh_runner.go:195] Run: crio --version
	I1025 09:29:56.096272   40412 ssh_runner.go:195] Run: crio --version
	I1025 09:29:56.129041   40412 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1025 09:29:56.133423   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:56.134041   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:56.134069   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:56.134339   40412 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1025 09:29:56.139340   40412 kubeadm.go:883] updating cluster {Name:pause-220312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-220312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:29:56.139528   40412 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:29:56.139608   40412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:29:56.199404   40412 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:29:56.199435   40412 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:29:56.199484   40412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:29:56.238111   40412 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:29:56.238136   40412 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:29:56.238143   40412 kubeadm.go:934] updating node { 192.168.61.192 8443 v1.34.1 crio true true} ...
	I1025 09:29:56.238253   40412 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-220312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-220312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
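
The [Unit]/[Service] fragment above is the systemd drop-in that a later scp (09:29:56.319, 312 bytes) writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the VM; once there, the merged unit could be inspected with, for example:

    systemctl cat kubelet
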
	I1025 09:29:56.238352   40412 ssh_runner.go:195] Run: crio config
	I1025 09:29:56.293607   40412 cni.go:84] Creating CNI manager for ""
	I1025 09:29:56.293633   40412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 09:29:56.293651   40412 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:29:56.293672   40412 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.192 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-220312 NodeName:pause-220312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.192"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.192 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:29:56.293785   40412 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.192
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-220312"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.192"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.192"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
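
A generated config like the one above can be sanity-checked offline; on kubeadm v1.26+ something like the following would do (an illustrative invocation, not one minikube itself runs; the path is the one the rendered config is later copied to):

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new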
	
	I1025 09:29:56.293848   40412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:29:56.306746   40412 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:29:56.306841   40412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:29:56.319853   40412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	W1025 09:29:52.953972   39691 pod_ready.go:104] pod "coredns-66bc5c9577-vxq79" is not "Ready", error: <nil>
	W1025 09:29:55.450161   39691 pod_ready.go:104] pod "coredns-66bc5c9577-vxq79" is not "Ready", error: <nil>
	W1025 09:29:57.450978   39691 pod_ready.go:104] pod "coredns-66bc5c9577-vxq79" is not "Ready", error: <nil>
	I1025 09:29:55.605629   40549 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 09:29:55.605812   40549 start.go:159] libmachine.API.Create for "kindnet-816358" (driver="kvm2")
	I1025 09:29:55.605840   40549 client.go:168] LocalClient.Create starting
	I1025 09:29:55.605891   40549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem
	I1025 09:29:55.605930   40549 main.go:141] libmachine: Decoding PEM data...
	I1025 09:29:55.605945   40549 main.go:141] libmachine: Parsing certificate...
	I1025 09:29:55.606001   40549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem
	I1025 09:29:55.606047   40549 main.go:141] libmachine: Decoding PEM data...
	I1025 09:29:55.606058   40549 main.go:141] libmachine: Parsing certificate...
	I1025 09:29:55.606339   40549 main.go:141] libmachine: creating domain...
	I1025 09:29:55.606355   40549 main.go:141] libmachine: creating network...
	I1025 09:29:55.607971   40549 main.go:141] libmachine: found existing default network
	I1025 09:29:55.608258   40549 main.go:141] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1025 09:29:55.609159   40549 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9a:8d:f2} reservation:<nil>}
	I1025 09:29:55.609637   40549 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:28:17:ab} reservation:<nil>}
	I1025 09:29:55.610138   40549 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:0d:3c:54} reservation:<nil>}
	I1025 09:29:55.610811   40549 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc06a0}
	I1025 09:29:55.610912   40549 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-kindnet-816358</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1025 09:29:55.617690   40549 main.go:141] libmachine: creating private network mk-kindnet-816358 192.168.72.0/24...
	I1025 09:29:55.713806   40549 main.go:141] libmachine: private network mk-kindnet-816358 192.168.72.0/24 created
	I1025 09:29:55.714213   40549 main.go:141] libmachine: <network>
	  <name>mk-kindnet-816358</name>
	  <uuid>2334446d-0c9c-408f-8bc5-ffcb6b34c89d</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:e5:19:cd'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
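
The XML above is libvirt's own dump of the network just created; the same output can be pulled by hand with virsh (illustrative, using the qemu:///system URI from the cluster config):

    virsh --connect qemu:///system net-dumpxml mk-kindnet-816358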
	
	I1025 09:29:55.714291   40549 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358 ...
	I1025 09:29:55.714326   40549 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21796-5973/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1025 09:29:55.714338   40549 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 09:29:55.714404   40549 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21796-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21796-5973/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1025 09:29:55.964184   40549 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358/id_rsa...
	I1025 09:29:56.159327   40549 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358/kindnet-816358.rawdisk...
	I1025 09:29:56.159418   40549 main.go:141] libmachine: Writing magic tar header
	I1025 09:29:56.159456   40549 main.go:141] libmachine: Writing SSH key tar header
	I1025 09:29:56.159580   40549 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358 ...
	I1025 09:29:56.159686   40549 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358
	I1025 09:29:56.159721   40549 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358 (perms=drwx------)
	I1025 09:29:56.159743   40549 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973/.minikube/machines
	I1025 09:29:56.159763   40549 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973/.minikube/machines (perms=drwxr-xr-x)
	I1025 09:29:56.159785   40549 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 09:29:56.159801   40549 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973/.minikube (perms=drwxr-xr-x)
	I1025 09:29:56.159814   40549 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973
	I1025 09:29:56.159829   40549 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973 (perms=drwxrwxr-x)
	I1025 09:29:56.159842   40549 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1025 09:29:56.159854   40549 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1025 09:29:56.159871   40549 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1025 09:29:56.159885   40549 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1025 09:29:56.159899   40549 main.go:141] libmachine: checking permissions on dir: /home
	I1025 09:29:56.159915   40549 main.go:141] libmachine: skipping /home - not owner
	I1025 09:29:56.159925   40549 main.go:141] libmachine: defining domain...
	I1025 09:29:56.161679   40549 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>kindnet-816358</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358/kindnet-816358.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-kindnet-816358'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1025 09:29:56.170516   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:c3:a9:9e in network default
	I1025 09:29:56.171370   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:29:56.171404   40549 main.go:141] libmachine: starting domain...
	I1025 09:29:56.171411   40549 main.go:141] libmachine: ensuring networks are active...
	I1025 09:29:56.172416   40549 main.go:141] libmachine: Ensuring network default is active
	I1025 09:29:56.172928   40549 main.go:141] libmachine: Ensuring network mk-kindnet-816358 is active
	I1025 09:29:56.173698   40549 main.go:141] libmachine: getting domain XML...
	I1025 09:29:56.174749   40549 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>kindnet-816358</name>
	  <uuid>ae352e9b-d269-4fa1-b5f7-97de871357aa</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358/kindnet-816358.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:27:65:d9'/>
	      <source network='mk-kindnet-816358'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:c3:a9:9e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1025 09:29:57.635360   40549 main.go:141] libmachine: waiting for domain to start...
	I1025 09:29:57.637188   40549 main.go:141] libmachine: domain is now running
	I1025 09:29:57.637203   40549 main.go:141] libmachine: waiting for IP...
	I1025 09:29:57.638277   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:29:57.638965   40549 main.go:141] libmachine: no network interface addresses found for domain kindnet-816358 (source=lease)
	I1025 09:29:57.638979   40549 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:29:57.639444   40549 main.go:141] libmachine: unable to find current IP address of domain kindnet-816358 in network mk-kindnet-816358 (interfaces detected: [])
	I1025 09:29:57.639492   40549 retry.go:31] will retry after 289.260297ms: waiting for domain to come up
	I1025 09:29:57.930304   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:29:57.931271   40549 main.go:141] libmachine: no network interface addresses found for domain kindnet-816358 (source=lease)
	I1025 09:29:57.931293   40549 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:29:57.931805   40549 main.go:141] libmachine: unable to find current IP address of domain kindnet-816358 in network mk-kindnet-816358 (interfaces detected: [])
	I1025 09:29:57.931850   40549 retry.go:31] will retry after 292.350255ms: waiting for domain to come up
	I1025 09:29:58.225519   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:29:58.226321   40549 main.go:141] libmachine: no network interface addresses found for domain kindnet-816358 (source=lease)
	I1025 09:29:58.226347   40549 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:29:58.226679   40549 main.go:141] libmachine: unable to find current IP address of domain kindnet-816358 in network mk-kindnet-816358 (interfaces detected: [])
	I1025 09:29:58.226712   40549 retry.go:31] will retry after 321.143809ms: waiting for domain to come up
	I1025 09:29:58.549178   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:29:58.550084   40549 main.go:141] libmachine: no network interface addresses found for domain kindnet-816358 (source=lease)
	I1025 09:29:58.550103   40549 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:29:58.550526   40549 main.go:141] libmachine: unable to find current IP address of domain kindnet-816358 in network mk-kindnet-816358 (interfaces detected: [])
	I1025 09:29:58.550559   40549 retry.go:31] will retry after 508.536821ms: waiting for domain to come up
	I1025 09:29:59.060390   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:29:59.061018   40549 main.go:141] libmachine: no network interface addresses found for domain kindnet-816358 (source=lease)
	I1025 09:29:59.061038   40549 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:29:59.061465   40549 main.go:141] libmachine: unable to find current IP address of domain kindnet-816358 in network mk-kindnet-816358 (interfaces detected: [])
	I1025 09:29:59.061499   40549 retry.go:31] will retry after 754.962983ms: waiting for domain to come up
	I1025 09:29:59.818763   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:29:59.819507   40549 main.go:141] libmachine: no network interface addresses found for domain kindnet-816358 (source=lease)
	I1025 09:29:59.819528   40549 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:29:59.819876   40549 main.go:141] libmachine: unable to find current IP address of domain kindnet-816358 in network mk-kindnet-816358 (interfaces detected: [])
	I1025 09:29:59.819910   40549 retry.go:31] will retry after 591.95837ms: waiting for domain to come up
	I1025 09:30:00.413958   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:30:00.414667   40549 main.go:141] libmachine: no network interface addresses found for domain kindnet-816358 (source=lease)
	I1025 09:30:00.414690   40549 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:30:00.415050   40549 main.go:141] libmachine: unable to find current IP address of domain kindnet-816358 in network mk-kindnet-816358 (interfaces detected: [])
	I1025 09:30:00.415109   40549 retry.go:31] will retry after 861.459849ms: waiting for domain to come up
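
The alternating source=lease / source=arp lookups above mirror the two address sources virsh exposes for a domain's interfaces; the equivalent manual queries would be (illustrative):

    virsh --connect qemu:///system domifaddr kindnet-816358 --source lease
    virsh --connect qemu:///system domifaddr kindnet-816358 --source arp
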
	I1025 09:29:56.342963   40412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:29:56.366632   40412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1025 09:29:56.393174   40412 ssh_runner.go:195] Run: grep 192.168.61.192	control-plane.minikube.internal$ /etc/hosts
	I1025 09:29:56.398476   40412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:29:56.588943   40412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:29:56.609304   40412 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312 for IP: 192.168.61.192
	I1025 09:29:56.609336   40412 certs.go:195] generating shared ca certs ...
	I1025 09:29:56.609358   40412 certs.go:227] acquiring lock for ca certs: {Name:mke8d6ba2f98d813f76972dbfee9daa2e84822df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:29:56.609544   40412 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key
	I1025 09:29:56.609596   40412 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key
	I1025 09:29:56.609606   40412 certs.go:257] generating profile certs ...
	I1025 09:29:56.609696   40412 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/client.key
	I1025 09:29:56.609761   40412 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/apiserver.key.67d2603a
	I1025 09:29:56.609804   40412 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/proxy-client.key
	I1025 09:29:56.609940   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/9881.pem (1338 bytes)
	W1025 09:29:56.609974   40412 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5973/.minikube/certs/9881_empty.pem, impossibly tiny 0 bytes
	I1025 09:29:56.609986   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:29:56.610022   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:29:56.610052   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:29:56.610077   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/key.pem (1679 bytes)
	I1025 09:29:56.610121   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem (1708 bytes)
	I1025 09:29:56.610724   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:29:56.644137   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 09:29:56.689075   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:29:56.781784   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1025 09:29:56.851328   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:29:56.972258   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:29:57.046703   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:29:57.133956   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:29:57.203953   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/certs/9881.pem --> /usr/share/ca-certificates/9881.pem (1338 bytes)
	I1025 09:29:57.271624   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem --> /usr/share/ca-certificates/98812.pem (1708 bytes)
	I1025 09:29:57.332289   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:29:57.389221   40412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:29:57.423056   40412 ssh_runner.go:195] Run: openssl version
	I1025 09:29:57.432964   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9881.pem && ln -fs /usr/share/ca-certificates/9881.pem /etc/ssl/certs/9881.pem"
	I1025 09:29:57.448713   40412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9881.pem
	I1025 09:29:57.460379   40412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/9881.pem
	I1025 09:29:57.460449   40412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9881.pem
	I1025 09:29:57.478498   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9881.pem /etc/ssl/certs/51391683.0"
	I1025 09:29:57.496222   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98812.pem && ln -fs /usr/share/ca-certificates/98812.pem /etc/ssl/certs/98812.pem"
	I1025 09:29:57.515389   40412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98812.pem
	I1025 09:29:57.521863   40412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/98812.pem
	I1025 09:29:57.521945   40412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98812.pem
	I1025 09:29:57.532684   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98812.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:29:57.551523   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:29:57.573217   40412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:29:57.589306   40412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:29:57.589382   40412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:29:57.604861   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
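The three ln steps above wire each CA into the system trust store: OpenSSL looks up certificates in /etc/ssl/certs by subject-name hash, so each PEM is symlinked as "<hash>.0" (b5213941 is minikubeCA's hash here). A minimal by-hand equivalent, with the path taken from the log for illustration:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"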
	I1025 09:29:57.626796   40412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:29:57.633461   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:29:57.644805   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:29:57.653436   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:29:57.661035   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:29:57.668978   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:29:57.678104   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
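Each -checkend 86400 run above asks whether the certificate stays valid for another 24 hours (86400 seconds); openssl exits non-zero if it would expire within that window. A hedged sketch of the same check:

	# exits 0 if still valid in 24h, non-zero if it expires within that window
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "apiserver.crt ok" || echo "apiserver.crt expiring soon"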
	I1025 09:29:57.686136   40412 kubeadm.go:400] StartCluster: {Name:pause-220312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-220312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:29:57.686266   40412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:29:57.686344   40412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:29:57.747038   40412 cri.go:89] found id: "07859464f9166fa1d7359a66f8ed9dc5a1487e3cddc919b6855ae3aad361bbf9"
	I1025 09:29:57.747074   40412 cri.go:89] found id: "720238c583854abcd754418c8719916550d6451a5fccbd31e2487ba1cb6319e5"
	I1025 09:29:57.747080   40412 cri.go:89] found id: "45c34007f8cf7266a1b9d58dcbba1b9b782b0649d2a43b84465d34f37bde6b9e"
	I1025 09:29:57.747085   40412 cri.go:89] found id: "c03ec903227da1dce0853648db885c6182b1104bf9ca1362bc3d2a58fdbe0ac0"
	I1025 09:29:57.747089   40412 cri.go:89] found id: "a2578bb7857d5b98c139a5fea6bb84ea8a16422469161555c5ef98aa376dc265"
	I1025 09:29:57.747093   40412 cri.go:89] found id: "89f34eaf4ef2646f3d7486eefea7402bd59f91ef73117f34c41f6b37a07e0749"
	I1025 09:29:57.747097   40412 cri.go:89] found id: "7ea2b3eef94de4b65c75d98f542afa63104d498b22d389cca485c22d95e19a8e"
	I1025 09:29:57.747102   40412 cri.go:89] found id: ""
	I1025 09:29:57.747155   40412 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-220312 -n pause-220312
helpers_test.go:269: (dbg) Run:  kubectl --context pause-220312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-220312 -n pause-220312
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-220312 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-220312 logs -n 25: (1.672827372s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p NoKubernetes-024391 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-024391       │ jenkins │ v1.37.0 │ 25 Oct 25 09:26 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-026829 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ running-upgrade-026829    │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │                     │
	│ stop    │ -p NoKubernetes-024391                                                                                                                                                                                                  │ NoKubernetes-024391       │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ delete  │ -p running-upgrade-026829                                                                                                                                                                                               │ running-upgrade-026829    │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ start   │ -p NoKubernetes-024391 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-024391       │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ start   │ -p cert-expiration-097778 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-097778    │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:28 UTC │
	│ delete  │ -p kubernetes-upgrade-254344                                                                                                                                                                                            │ kubernetes-upgrade-254344 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ start   │ -p cert-options-585228 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-585228       │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:28 UTC │
	│ ssh     │ -p NoKubernetes-024391 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-024391       │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │                     │
	│ delete  │ -p NoKubernetes-024391                                                                                                                                                                                                  │ NoKubernetes-024391       │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ start   │ -p pause-220312 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-220312              │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ force-systemd-flag-811701 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-811701 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ delete  │ -p force-systemd-flag-811701                                                                                                                                                                                            │ force-systemd-flag-811701 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:27 UTC │
	│ start   │ -p stopped-upgrade-196082 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-196082    │ jenkins │ v1.32.0 │ 25 Oct 25 09:27 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ cert-options-585228 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-585228       │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ ssh     │ -p cert-options-585228 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-585228       │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ delete  │ -p cert-options-585228                                                                                                                                                                                                  │ cert-options-585228       │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ start   │ -p auto-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-816358               │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:30 UTC │
	│ stop    │ stopped-upgrade-196082 stop                                                                                                                                                                                             │ stopped-upgrade-196082    │ jenkins │ v1.32.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p stopped-upgrade-196082 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ stopped-upgrade-196082    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p pause-220312 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-220312              │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:30 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-196082 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-196082    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │                     │
	│ delete  │ -p stopped-upgrade-196082                                                                                                                                                                                               │ stopped-upgrade-196082    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p kindnet-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                                                                                  │ kindnet-816358            │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │                     │
	│ ssh     │ -p auto-816358 pgrep -a kubelet                                                                                                                                                                                         │ auto-816358               │ jenkins │ v1.37.0 │ 25 Oct 25 09:30 UTC │ 25 Oct 25 09:30 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:29:55
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:29:55.534982   40549 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:29:55.535309   40549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:29:55.535319   40549 out.go:374] Setting ErrFile to fd 2...
	I1025 09:29:55.535323   40549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:29:55.535559   40549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
	I1025 09:29:55.536087   40549 out.go:368] Setting JSON to false
	I1025 09:29:55.537025   40549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4346,"bootTime":1761380250,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:29:55.537112   40549 start.go:141] virtualization: kvm guest
	I1025 09:29:55.539461   40549 out.go:179] * [kindnet-816358] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:29:55.541232   40549 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:29:55.541220   40549 notify.go:220] Checking for updates...
	I1025 09:29:55.544635   40549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:29:55.546398   40549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	I1025 09:29:55.547898   40549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 09:29:55.549433   40549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:29:55.550822   40549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:29:55.553189   40549 config.go:182] Loaded profile config "auto-816358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:29:55.553391   40549 config.go:182] Loaded profile config "cert-expiration-097778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:29:55.553602   40549 config.go:182] Loaded profile config "pause-220312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:29:55.553765   40549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:29:55.597207   40549 out.go:179] * Using the kvm2 driver based on user configuration
	I1025 09:29:55.598604   40549 start.go:305] selected driver: kvm2
	I1025 09:29:55.598623   40549 start.go:925] validating driver "kvm2" against <nil>
	I1025 09:29:55.598635   40549 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:29:55.599499   40549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:29:55.599742   40549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:29:55.599769   40549 cni.go:84] Creating CNI manager for "kindnet"
	I1025 09:29:55.599775   40549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:29:55.599822   40549 start.go:349] cluster config:
	{Name:kindnet-816358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-816358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:29:55.599947   40549 iso.go:125] acquiring lock: {Name:mk56ae07ef3e2fe29ebca77d84768cf173c5b3d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:29:55.601747   40549 out.go:179] * Starting "kindnet-816358" primary control-plane node in "kindnet-816358" cluster
	I1025 09:29:55.603008   40549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:29:55.603060   40549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1025 09:29:55.603073   40549 cache.go:58] Caching tarball of preloaded images
	I1025 09:29:55.603174   40549 preload.go:233] Found /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 09:29:55.603189   40549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:29:55.603353   40549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/config.json ...
	I1025 09:29:55.603382   40549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/config.json: {Name:mkab1a2560e7c80237a9e5eb471fb560e51305a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:29:55.603580   40549 start.go:360] acquireMachinesLock for kindnet-816358: {Name:mk307ae3583c207a47794987d4930662cf65d417 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 09:29:55.603619   40549 start.go:364] duration metric: took 20.896µs to acquireMachinesLock for "kindnet-816358"
	I1025 09:29:55.603640   40549 start.go:93] Provisioning new machine with config: &{Name:kindnet-816358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-816358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:29:55.603722   40549 start.go:125] createHost starting for "" (driver="kvm2")
	I1025 09:29:52.945358   40412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:29:52.945390   40412 machine.go:96] duration metric: took 6.530456421s to provisionDockerMachine
	I1025 09:29:52.945403   40412 start.go:293] postStartSetup for "pause-220312" (driver="kvm2")
	I1025 09:29:52.945415   40412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:29:52.945492   40412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:29:52.950206   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:52.950873   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:52.950921   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:52.951120   40412 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/pause-220312/id_rsa Username:docker}
	I1025 09:29:53.048390   40412 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:29:53.055264   40412 info.go:137] Remote host: Buildroot 2025.02
	I1025 09:29:53.055303   40412 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5973/.minikube/addons for local assets ...
	I1025 09:29:53.055377   40412 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-5973/.minikube/files for local assets ...
	I1025 09:29:53.055493   40412 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem -> 98812.pem in /etc/ssl/certs
	I1025 09:29:53.055681   40412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:29:53.073197   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem --> /etc/ssl/certs/98812.pem (1708 bytes)
	I1025 09:29:53.106798   40412 start.go:296] duration metric: took 161.355117ms for postStartSetup
	I1025 09:29:53.106856   40412 fix.go:56] duration metric: took 6.695908107s for fixHost
	I1025 09:29:53.110512   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.111047   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:53.111097   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.111390   40412 main.go:141] libmachine: Using SSH client type: native
	I1025 09:29:53.111692   40412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I1025 09:29:53.111708   40412 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 09:29:53.231369   40412 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761384593.227712471
	
	I1025 09:29:53.231394   40412 fix.go:216] guest clock: 1761384593.227712471
	I1025 09:29:53.231404   40412 fix.go:229] Guest: 2025-10-25 09:29:53.227712471 +0000 UTC Remote: 2025-10-25 09:29:53.106861508 +0000 UTC m=+6.843347300 (delta=120.850963ms)
	I1025 09:29:53.231456   40412 fix.go:200] guest clock delta is within tolerance: 120.850963ms
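The guest-clock check above runs date +%s.%N inside the VM and compares it against the host-side timestamp; the ~121 ms delta is within tolerance, so no resync is needed. A rough standalone equivalent (key path and user are assumptions, not minikube's code path):

	host_ts=$(date +%s.%N)
	guest_ts=$(ssh -i ~/.minikube/machines/pause-220312/id_rsa docker@192.168.61.192 'date +%s.%N')
	echo "guest clock delta: $(echo "$guest_ts - $host_ts" | bc)s"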
	I1025 09:29:53.231466   40412 start.go:83] releasing machines lock for "pause-220312", held for 6.820531996s
	I1025 09:29:53.234859   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.235313   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:53.235346   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.236093   40412 ssh_runner.go:195] Run: cat /version.json
	I1025 09:29:53.236134   40412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:29:53.239460   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.239903   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:53.239933   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.239959   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.240136   40412 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/pause-220312/id_rsa Username:docker}
	I1025 09:29:53.240618   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:53.240651   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:53.240841   40412 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/pause-220312/id_rsa Username:docker}
	I1025 09:29:53.360875   40412 ssh_runner.go:195] Run: systemctl --version
	I1025 09:29:53.368199   40412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:29:53.524671   40412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:29:53.536824   40412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:29:53.536900   40412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:29:53.550936   40412 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:29:53.550968   40412 start.go:495] detecting cgroup driver to use...
	I1025 09:29:53.551049   40412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:29:53.575474   40412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:29:53.595751   40412 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:29:53.595848   40412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:29:53.616773   40412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:29:53.634871   40412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:29:53.835822   40412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:29:54.042928   40412 docker.go:234] disabling docker service ...
	I1025 09:29:54.043012   40412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:29:54.074937   40412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:29:54.092816   40412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:29:54.300787   40412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:29:54.498366   40412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:29:54.519343   40412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:29:54.615128   40412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:29:54.615217   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.647393   40412 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:29:54.647460   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.674617   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.713988   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.741287   40412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:29:54.759322   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.773742   40412 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:29:54.793448   40412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
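The sed edits above rewrite the CRI-O drop-in in place: pause image, cgroupfs cgroup manager, conmon cgroup, and an unprivileged-port sysctl. Assuming defaults elsewhere, the resulting /etc/crio/crio.conf.d/02-crio.conf would plausibly contain:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]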
	I1025 09:29:54.818581   40412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:29:54.836610   40412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:29:54.870041   40412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:29:55.236756   40412 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:29:56.014495   40412 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:29:56.014573   40412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:29:56.020977   40412 start.go:563] Will wait 60s for crictl version
	I1025 09:29:56.021061   40412 ssh_runner.go:195] Run: which crictl
	I1025 09:29:56.025408   40412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 09:29:56.064944   40412 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
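The same version query can be issued explicitly against the CRI socket configured in /etc/crictl.yaml above (illustrative invocation, not minikube's code path):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version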
	I1025 09:29:56.065041   40412 ssh_runner.go:195] Run: crio --version
	I1025 09:29:56.096272   40412 ssh_runner.go:195] Run: crio --version
	I1025 09:29:56.129041   40412 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1025 09:29:56.133423   40412 main.go:141] libmachine: domain pause-220312 has defined MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:56.134041   40412 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:ab:5c", ip: ""} in network mk-pause-220312: {Iface:virbr3 ExpiryTime:2025-10-25 10:28:30 +0000 UTC Type:0 Mac:52:54:00:2b:ab:5c Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:pause-220312 Clientid:01:52:54:00:2b:ab:5c}
	I1025 09:29:56.134069   40412 main.go:141] libmachine: domain pause-220312 has defined IP address 192.168.61.192 and MAC address 52:54:00:2b:ab:5c in network mk-pause-220312
	I1025 09:29:56.134339   40412 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1025 09:29:56.139340   40412 kubeadm.go:883] updating cluster {Name:pause-220312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-220312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:29:56.139528   40412 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:29:56.139608   40412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:29:56.199404   40412 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:29:56.199435   40412 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:29:56.199484   40412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:29:56.238111   40412 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:29:56.238136   40412 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:29:56.238143   40412 kubeadm.go:934] updating node { 192.168.61.192 8443 v1.34.1 crio true true} ...
	I1025 09:29:56.238253   40412 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-220312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-220312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
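The rendered kubelet unit above lands as a systemd drop-in (the 10-kubeadm.conf scp at 09:29:56.319853 below), after which the usual reload/restart cycle would apply it; roughly, and hedged as an illustration rather than minikube's exact sequence:

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	# the unit body from the log is written to 10-kubeadm.conf here
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet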
	I1025 09:29:56.238352   40412 ssh_runner.go:195] Run: crio config
	I1025 09:29:56.293607   40412 cni.go:84] Creating CNI manager for ""
	I1025 09:29:56.293633   40412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 09:29:56.293651   40412 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:29:56.293672   40412 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.192 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-220312 NodeName:pause-220312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.192"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.192 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:29:56.293785   40412 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.192
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-220312"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.192"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.192"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
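With the combined InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration above generated, the standard way to consume it is a kubeadm run pointed at the file; a sketch assuming minikube's usual /var/tmp/minikube location and a fresh init (on this second start the real flow may instead reuse the existing cluster state):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml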
	
	I1025 09:29:56.293848   40412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:29:56.306746   40412 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:29:56.306841   40412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:29:56.319853   40412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	W1025 09:29:52.953972   39691 pod_ready.go:104] pod "coredns-66bc5c9577-vxq79" is not "Ready", error: <nil>
	W1025 09:29:55.450161   39691 pod_ready.go:104] pod "coredns-66bc5c9577-vxq79" is not "Ready", error: <nil>
	W1025 09:29:57.450978   39691 pod_ready.go:104] pod "coredns-66bc5c9577-vxq79" is not "Ready", error: <nil>
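These repeated warnings come from a third interleaved process (pid 39691) polling a CoreDNS pod that never reports Ready; a typical way to inspect such a pod by hand (the context name here is an assumption):

	kubectl --context pause-220312 -n kube-system describe pod coredns-66bc5c9577-vxq79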
	I1025 09:29:55.605629   40549 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 09:29:55.605812   40549 start.go:159] libmachine.API.Create for "kindnet-816358" (driver="kvm2")
	I1025 09:29:55.605840   40549 client.go:168] LocalClient.Create starting
	I1025 09:29:55.605891   40549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem
	I1025 09:29:55.605930   40549 main.go:141] libmachine: Decoding PEM data...
	I1025 09:29:55.605945   40549 main.go:141] libmachine: Parsing certificate...
	I1025 09:29:55.606001   40549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem
	I1025 09:29:55.606047   40549 main.go:141] libmachine: Decoding PEM data...
	I1025 09:29:55.606058   40549 main.go:141] libmachine: Parsing certificate...
	I1025 09:29:55.606339   40549 main.go:141] libmachine: creating domain...
	I1025 09:29:55.606355   40549 main.go:141] libmachine: creating network...
	I1025 09:29:55.607971   40549 main.go:141] libmachine: found existing default network
	I1025 09:29:55.608258   40549 main.go:141] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1025 09:29:55.609159   40549 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9a:8d:f2} reservation:<nil>}
	I1025 09:29:55.609637   40549 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:28:17:ab} reservation:<nil>}
	I1025 09:29:55.610138   40549 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:0d:3c:54} reservation:<nil>}
	I1025 09:29:55.610811   40549 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc06a0}
	I1025 09:29:55.610912   40549 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-kindnet-816358</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1025 09:29:55.617690   40549 main.go:141] libmachine: creating private network mk-kindnet-816358 192.168.72.0/24...
	I1025 09:29:55.713806   40549 main.go:141] libmachine: private network mk-kindnet-816358 192.168.72.0/24 created
	I1025 09:29:55.714213   40549 main.go:141] libmachine: <network>
	  <name>mk-kindnet-816358</name>
	  <uuid>2334446d-0c9c-408f-8bc5-ffcb6b34c89d</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:e5:19:cd'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
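minikube drives libvirt through its Go API here, but the created network is equivalent to defining and starting the XML above by hand; an illustrative virsh equivalent, assuming the XML were saved to a file:

	virsh -c qemu:///system net-define mk-kindnet-816358.xml
	virsh -c qemu:///system net-start mk-kindnet-816358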
	
	I1025 09:29:55.714291   40549 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358 ...
	I1025 09:29:55.714326   40549 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21796-5973/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1025 09:29:55.714338   40549 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 09:29:55.714404   40549 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21796-5973/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21796-5973/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1025 09:29:55.964184   40549 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358/id_rsa...
	I1025 09:29:56.159327   40549 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358/kindnet-816358.rawdisk...
	I1025 09:29:56.159418   40549 main.go:141] libmachine: Writing magic tar header
	I1025 09:29:56.159456   40549 main.go:141] libmachine: Writing SSH key tar header
	I1025 09:29:56.159580   40549 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358 ...
	I1025 09:29:56.159686   40549 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358
	I1025 09:29:56.159721   40549 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358 (perms=drwx------)
	I1025 09:29:56.159743   40549 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973/.minikube/machines
	I1025 09:29:56.159763   40549 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973/.minikube/machines (perms=drwxr-xr-x)
	I1025 09:29:56.159785   40549 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 09:29:56.159801   40549 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973/.minikube (perms=drwxr-xr-x)
	I1025 09:29:56.159814   40549 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21796-5973
	I1025 09:29:56.159829   40549 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21796-5973 (perms=drwxrwxr-x)
	I1025 09:29:56.159842   40549 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1025 09:29:56.159854   40549 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1025 09:29:56.159871   40549 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1025 09:29:56.159885   40549 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1025 09:29:56.159899   40549 main.go:141] libmachine: checking permissions on dir: /home
	I1025 09:29:56.159915   40549 main.go:141] libmachine: skipping /home - not owner
	I1025 09:29:56.159925   40549 main.go:141] libmachine: defining domain...
	I1025 09:29:56.161679   40549 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>kindnet-816358</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358/kindnet-816358.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-kindnet-816358'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
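
The XML above is handed to libvirt verbatim. A standalone sketch of the define → ensure-networks → start sequence the next few log lines record, using the libvirt.org/go/libvirt bindings that the kvm2 driver wraps (building requires the libvirt C headers; error handling trimmed):

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        domXML := "<domain type='kvm'>...</domain>" // paste the full XML printed above

        // "defining domain using XML" -- a persistent define, no start yet.
        dom, err := conn.DomainDefineXML(domXML)
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        // "ensuring networks are active..." for both attached networks.
        for _, name := range []string{"default", "mk-kindnet-816358"} {
            net, err := conn.LookupNetworkByName(name)
            if err != nil {
                log.Fatal(err)
            }
            if active, _ := net.IsActive(); !active {
                if err := net.Create(); err != nil {
                    log.Fatal(err)
                }
            }
            net.Free()
        }

        // "starting domain..." -- boots the defined domain.
        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
    }
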
	
	I1025 09:29:56.170516   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:c3:a9:9e in network default
	I1025 09:29:56.171370   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:29:56.171404   40549 main.go:141] libmachine: starting domain...
	I1025 09:29:56.171411   40549 main.go:141] libmachine: ensuring networks are active...
	I1025 09:29:56.172416   40549 main.go:141] libmachine: Ensuring network default is active
	I1025 09:29:56.172928   40549 main.go:141] libmachine: Ensuring network mk-kindnet-816358 is active
	I1025 09:29:56.173698   40549 main.go:141] libmachine: getting domain XML...
	I1025 09:29:56.174749   40549 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>kindnet-816358</name>
	  <uuid>ae352e9b-d269-4fa1-b5f7-97de871357aa</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21796-5973/.minikube/machines/kindnet-816358/kindnet-816358.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:27:65:d9'/>
	      <source network='mk-kindnet-816358'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:c3:a9:9e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1025 09:29:57.635360   40549 main.go:141] libmachine: waiting for domain to start...
	I1025 09:29:57.637188   40549 main.go:141] libmachine: domain is now running
	I1025 09:29:57.637203   40549 main.go:141] libmachine: waiting for IP...
	I1025 09:29:57.638277   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:29:57.638965   40549 main.go:141] libmachine: no network interface addresses found for domain kindnet-816358 (source=lease)
	I1025 09:29:57.638979   40549 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:29:57.639444   40549 main.go:141] libmachine: unable to find current IP address of domain kindnet-816358 in network mk-kindnet-816358 (interfaces detected: [])
	I1025 09:29:57.639492   40549 retry.go:31] will retry after 289.260297ms: waiting for domain to come up
	I1025 09:29:57.930304   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:29:57.931271   40549 main.go:141] libmachine: no network interface addresses found for domain kindnet-816358 (source=lease)
	I1025 09:29:57.931293   40549 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:29:57.931805   40549 main.go:141] libmachine: unable to find current IP address of domain kindnet-816358 in network mk-kindnet-816358 (interfaces detected: [])
	I1025 09:29:57.931850   40549 retry.go:31] will retry after 292.350255ms: waiting for domain to come up
	I1025 09:29:58.225519   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:29:58.226321   40549 main.go:141] libmachine: no network interface addresses found for domain kindnet-816358 (source=lease)
	I1025 09:29:58.226347   40549 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:29:58.226679   40549 main.go:141] libmachine: unable to find current IP address of domain kindnet-816358 in network mk-kindnet-816358 (interfaces detected: [])
	I1025 09:29:58.226712   40549 retry.go:31] will retry after 321.143809ms: waiting for domain to come up
	I1025 09:29:58.549178   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:29:58.550084   40549 main.go:141] libmachine: no network interface addresses found for domain kindnet-816358 (source=lease)
	I1025 09:29:58.550103   40549 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:29:58.550526   40549 main.go:141] libmachine: unable to find current IP address of domain kindnet-816358 in network mk-kindnet-816358 (interfaces detected: [])
	I1025 09:29:58.550559   40549 retry.go:31] will retry after 508.536821ms: waiting for domain to come up
	I1025 09:29:59.060390   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:29:59.061018   40549 main.go:141] libmachine: no network interface addresses found for domain kindnet-816358 (source=lease)
	I1025 09:29:59.061038   40549 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:29:59.061465   40549 main.go:141] libmachine: unable to find current IP address of domain kindnet-816358 in network mk-kindnet-816358 (interfaces detected: [])
	I1025 09:29:59.061499   40549 retry.go:31] will retry after 754.962983ms: waiting for domain to come up
	I1025 09:29:59.818763   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:29:59.819507   40549 main.go:141] libmachine: no network interface addresses found for domain kindnet-816358 (source=lease)
	I1025 09:29:59.819528   40549 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:29:59.819876   40549 main.go:141] libmachine: unable to find current IP address of domain kindnet-816358 in network mk-kindnet-816358 (interfaces detected: [])
	I1025 09:29:59.819910   40549 retry.go:31] will retry after 591.95837ms: waiting for domain to come up
	I1025 09:30:00.413958   40549 main.go:141] libmachine: domain kindnet-816358 has defined MAC address 52:54:00:27:65:d9 in network mk-kindnet-816358
	I1025 09:30:00.414667   40549 main.go:141] libmachine: no network interface addresses found for domain kindnet-816358 (source=lease)
	I1025 09:30:00.414690   40549 main.go:141] libmachine: trying to list again with source=arp
	I1025 09:30:00.415050   40549 main.go:141] libmachine: unable to find current IP address of domain kindnet-816358 in network mk-kindnet-816358 (interfaces detected: [])
	I1025 09:30:00.415109   40549 retry.go:31] will retry after 861.459849ms: waiting for domain to come up
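
The retry loop above repeats for as long as libvirt has neither a DHCP lease nor an ARP entry for the new MAC. A sketch of that poll, assuming the same libvirt bindings (the function name and backoff values are illustrative, not minikube's):

    package kvm

    import (
        "fmt"
        "math/rand"
        "time"

        libvirt "libvirt.org/go/libvirt"
    )

    // waitForIP polls libvirt for the guest's interface addresses, trying the
    // DHCP lease table first and falling back to the ARP cache, with a
    // randomized delay between attempts -- the lease/arp/retry pattern in the log.
    func waitForIP(dom *libvirt.Domain, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            for _, src := range []libvirt.DomainInterfaceAddressesSource{
                libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_LEASE, // source=lease
                libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_ARP,   // source=arp
            } {
                ifaces, err := dom.ListAllInterfaceAddresses(src)
                if err != nil {
                    continue // the domain may not be reporting addresses yet
                }
                for _, iface := range ifaces {
                    for _, addr := range iface.Addrs {
                        if addr.Addr != "" {
                            return addr.Addr, nil
                        }
                    }
                }
            }
            time.Sleep(time.Duration(250+rand.Intn(750)) * time.Millisecond)
        }
        return "", fmt.Errorf("timed out waiting for domain to come up")
    }
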
	I1025 09:29:56.342963   40412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:29:56.366632   40412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1025 09:29:56.393174   40412 ssh_runner.go:195] Run: grep 192.168.61.192	control-plane.minikube.internal$ /etc/hosts
	I1025 09:29:56.398476   40412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:29:56.588943   40412 ssh_runner.go:195] Run: sudo systemctl start kubelet
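
The last two Run: lines are the standard unit-refresh pair: reload systemd so the just-copied kubelet.service is re-read, then start the unit. As a hypothetical local helper (in the log the real calls go through minikube's SSH-backed runner):

    package kvm

    import "os/exec"

    // restartKubelet mirrors the two commands above: daemon-reload picks up
    // the freshly written unit file, then the unit is started.
    func restartKubelet() error {
        if err := exec.Command("sudo", "systemctl", "daemon-reload").Run(); err != nil {
            return err
        }
        return exec.Command("sudo", "systemctl", "start", "kubelet").Run()
    }
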
	I1025 09:29:56.609304   40412 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312 for IP: 192.168.61.192
	I1025 09:29:56.609336   40412 certs.go:195] generating shared ca certs ...
	I1025 09:29:56.609358   40412 certs.go:227] acquiring lock for ca certs: {Name:mke8d6ba2f98d813f76972dbfee9daa2e84822df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:29:56.609544   40412 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key
	I1025 09:29:56.609596   40412 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key
	I1025 09:29:56.609606   40412 certs.go:257] generating profile certs ...
	I1025 09:29:56.609696   40412 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/client.key
	I1025 09:29:56.609761   40412 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/apiserver.key.67d2603a
	I1025 09:29:56.609804   40412 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/proxy-client.key
	I1025 09:29:56.609940   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/9881.pem (1338 bytes)
	W1025 09:29:56.609974   40412 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-5973/.minikube/certs/9881_empty.pem, impossibly tiny 0 bytes
	I1025 09:29:56.609986   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:29:56.610022   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:29:56.610052   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:29:56.610077   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/certs/key.pem (1679 bytes)
	I1025 09:29:56.610121   40412 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem (1708 bytes)
	I1025 09:29:56.610724   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:29:56.644137   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 09:29:56.689075   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:29:56.781784   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1025 09:29:56.851328   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:29:56.972258   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:29:57.046703   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:29:57.133956   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/pause-220312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:29:57.203953   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/certs/9881.pem --> /usr/share/ca-certificates/9881.pem (1338 bytes)
	I1025 09:29:57.271624   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/ssl/certs/98812.pem --> /usr/share/ca-certificates/98812.pem (1708 bytes)
	I1025 09:29:57.332289   40412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:29:57.389221   40412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:29:57.423056   40412 ssh_runner.go:195] Run: openssl version
	I1025 09:29:57.432964   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9881.pem && ln -fs /usr/share/ca-certificates/9881.pem /etc/ssl/certs/9881.pem"
	I1025 09:29:57.448713   40412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9881.pem
	I1025 09:29:57.460379   40412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/9881.pem
	I1025 09:29:57.460449   40412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9881.pem
	I1025 09:29:57.478498   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9881.pem /etc/ssl/certs/51391683.0"
	I1025 09:29:57.496222   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98812.pem && ln -fs /usr/share/ca-certificates/98812.pem /etc/ssl/certs/98812.pem"
	I1025 09:29:57.515389   40412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98812.pem
	I1025 09:29:57.521863   40412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/98812.pem
	I1025 09:29:57.521945   40412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98812.pem
	I1025 09:29:57.532684   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98812.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:29:57.551523   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:29:57.573217   40412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:29:57.589306   40412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:29:57.589382   40412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:29:57.604861   40412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
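
The ls/hash/ln triples above install each CA into the guest's OpenSSL trust store, which resolves certificates by subject-hash filenames such as b5213941.0. A local sketch of one round (hypothetical helper; in the log this runs inside the VM over SSH):

    package kvm

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert reproduces the hash-and-symlink step: ask openssl for the
    // certificate's subject hash, then point /etc/ssl/certs/<hash>.0 at the
    // PEM, with ln -fs semantics.
    func installCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any stale link, like ln -f
        return os.Symlink(pemPath, link)
    }
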
	I1025 09:29:57.626796   40412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:29:57.633461   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:29:57.644805   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:29:57.653436   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:29:57.661035   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:29:57.668978   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:29:57.678104   40412 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
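
The six openssl probes above all use -checkend 86400, i.e. "will this certificate still be valid in 24 hours?"; a non-zero exit is the cue to regenerate. The equivalent check as an assumed helper:

    package kvm

    import "os/exec"

    // certValidForADay returns true when the certificate will not expire
    // within the next 86400 seconds; openssl exits non-zero otherwise.
    func certValidForADay(path string) bool {
        return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
    }
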
	I1025 09:29:57.686136   40412 kubeadm.go:400] StartCluster: {Name:pause-220312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-220312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:29:57.686266   40412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:29:57.686344   40412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:29:57.747038   40412 cri.go:89] found id: "07859464f9166fa1d7359a66f8ed9dc5a1487e3cddc919b6855ae3aad361bbf9"
	I1025 09:29:57.747074   40412 cri.go:89] found id: "720238c583854abcd754418c8719916550d6451a5fccbd31e2487ba1cb6319e5"
	I1025 09:29:57.747080   40412 cri.go:89] found id: "45c34007f8cf7266a1b9d58dcbba1b9b782b0649d2a43b84465d34f37bde6b9e"
	I1025 09:29:57.747085   40412 cri.go:89] found id: "c03ec903227da1dce0853648db885c6182b1104bf9ca1362bc3d2a58fdbe0ac0"
	I1025 09:29:57.747089   40412 cri.go:89] found id: "a2578bb7857d5b98c139a5fea6bb84ea8a16422469161555c5ef98aa376dc265"
	I1025 09:29:57.747093   40412 cri.go:89] found id: "89f34eaf4ef2646f3d7486eefea7402bd59f91ef73117f34c41f6b37a07e0749"
	I1025 09:29:57.747097   40412 cri.go:89] found id: "7ea2b3eef94de4b65c75d98f542afa63104d498b22d389cca485c22d95e19a8e"
	I1025 09:29:57.747102   40412 cri.go:89] found id: ""
	I1025 09:29:57.747155   40412 ssh_runner.go:195] Run: sudo runc list -f json
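
The crictl invocation above is what produced the seven container IDs just listed: -a includes non-running containers, --quiet prints bare IDs, and the label filter narrows the result to kube-system. A sketch of the same call (hypothetical wrapper around the exact command shown):

    package kvm

    import (
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers shells out the way the ssh_runner line above
    // does and returns one container ID per output line.
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }
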

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-220312 -n pause-220312
helpers_test.go:269: (dbg) Run:  kubectl --context pause-220312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (48.77s)

                                                
                                    

Test pass (280/323)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.95
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 3.83
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.65
22 TestOffline 82.18
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 151.51
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 11.53
35 TestAddons/parallel/Registry 18.34
36 TestAddons/parallel/RegistryCreds 0.67
38 TestAddons/parallel/InspektorGadget 6.32
39 TestAddons/parallel/MetricsServer 5.85
41 TestAddons/parallel/CSI 63.91
42 TestAddons/parallel/Headlamp 22.99
43 TestAddons/parallel/CloudSpanner 6.7
44 TestAddons/parallel/LocalPath 57.71
45 TestAddons/parallel/NvidiaDevicePlugin 6.75
46 TestAddons/parallel/Yakd 10.89
48 TestAddons/StoppedEnableDisable 78.81
49 TestCertOptions 83.36
50 TestCertExpiration 302.59
52 TestForceSystemdFlag 49.52
53 TestForceSystemdEnv 55.89
58 TestErrorSpam/setup 37.67
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.65
61 TestErrorSpam/pause 1.59
62 TestErrorSpam/unpause 1.84
63 TestErrorSpam/stop 4.31
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 76.26
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 32.72
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.23
75 TestFunctional/serial/CacheCmd/cache/add_local 2
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 33.6
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.55
86 TestFunctional/serial/LogsFileCmd 1.51
87 TestFunctional/serial/InvalidService 3.81
89 TestFunctional/parallel/ConfigCmd 0.42
90 TestFunctional/parallel/DashboardCmd 31.79
91 TestFunctional/parallel/DryRun 0.25
92 TestFunctional/parallel/InternationalLanguage 0.13
93 TestFunctional/parallel/StatusCmd 0.73
97 TestFunctional/parallel/ServiceCmdConnect 32.53
98 TestFunctional/parallel/AddonsCmd 0.38
99 TestFunctional/parallel/PersistentVolumeClaim 50.96
101 TestFunctional/parallel/SSHCmd 0.34
102 TestFunctional/parallel/CpCmd 1.14
103 TestFunctional/parallel/MySQL 26.52
104 TestFunctional/parallel/FileSync 0.17
105 TestFunctional/parallel/CertSync 1.02
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
113 TestFunctional/parallel/License 0.28
114 TestFunctional/parallel/ServiceCmd/DeployApp 8.2
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.58
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.17
122 TestFunctional/parallel/ImageCommands/Setup 1.51
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
127 TestFunctional/parallel/MountCmd/any-port 10.21
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.59
129 TestFunctional/parallel/ProfileCmd/profile_list 0.33
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.5
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.77
137 TestFunctional/parallel/ServiceCmd/List 0.25
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.25
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.26
149 TestFunctional/parallel/ServiceCmd/Format 0.27
150 TestFunctional/parallel/ServiceCmd/URL 0.27
151 TestFunctional/parallel/MountCmd/specific-port 1.4
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.44
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 251.64
161 TestMultiControlPlane/serial/DeployApp 7.82
162 TestMultiControlPlane/serial/PingHostFromPods 1.32
163 TestMultiControlPlane/serial/AddWorkerNode 47.61
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.7
166 TestMultiControlPlane/serial/CopyFile 10.95
167 TestMultiControlPlane/serial/StopSecondaryNode 85.11
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.53
169 TestMultiControlPlane/serial/RestartSecondaryNode 44.71
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.75
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 363.54
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.51
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
174 TestMultiControlPlane/serial/StopCluster 244.13
175 TestMultiControlPlane/serial/RestartCluster 107.12
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
177 TestMultiControlPlane/serial/AddSecondaryNode 75.49
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.7
182 TestJSONOutput/start/Command 73.96
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.72
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.65
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 6.93
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.24
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 82.57
214 TestMountStart/serial/StartWithMountFirst 23.55
215 TestMountStart/serial/VerifyMountFirst 0.3
216 TestMountStart/serial/StartWithMountSecond 22.86
217 TestMountStart/serial/VerifyMountSecond 0.3
218 TestMountStart/serial/DeleteFirst 0.7
219 TestMountStart/serial/VerifyMountPostDelete 0.31
220 TestMountStart/serial/Stop 1.3
221 TestMountStart/serial/RestartStopped 18.02
222 TestMountStart/serial/VerifyMountPostStop 0.3
225 TestMultiNode/serial/FreshStart2Nodes 128.52
226 TestMultiNode/serial/DeployApp2Nodes 6.5
227 TestMultiNode/serial/PingHostFrom2Pods 0.84
228 TestMultiNode/serial/AddNode 43.53
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.46
231 TestMultiNode/serial/CopyFile 5.99
232 TestMultiNode/serial/StopNode 2.18
233 TestMultiNode/serial/StartAfterStop 41.72
234 TestMultiNode/serial/RestartKeepsNodes 286.32
235 TestMultiNode/serial/DeleteNode 2.58
236 TestMultiNode/serial/StopMultiNode 175.53
237 TestMultiNode/serial/RestartMultiNode 115.39
238 TestMultiNode/serial/ValidateNameConflict 40.67
245 TestScheduledStopUnix 110.19
249 TestRunningBinaryUpgrade 167.62
251 TestKubernetesUpgrade 176.92
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
255 TestNoKubernetes/serial/StartWithK8s 78.94
256 TestNoKubernetes/serial/StartWithStopK8s 31.12
264 TestNetworkPlugins/group/false 4.23
268 TestNoKubernetes/serial/Start 45.66
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
270 TestNoKubernetes/serial/ProfileList 9.85
271 TestNoKubernetes/serial/Stop 1.4
272 TestNoKubernetes/serial/StartNoArgs 30.51
273 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
282 TestPause/serial/Start 130.45
283 TestStoppedBinaryUpgrade/Setup 0.49
284 TestStoppedBinaryUpgrade/Upgrade 133.8
285 TestNetworkPlugins/group/auto/Start 111.14
287 TestStoppedBinaryUpgrade/MinikubeLogs 1.06
288 TestNetworkPlugins/group/kindnet/Start 93.89
289 TestNetworkPlugins/group/auto/KubeletFlags 0.18
290 TestNetworkPlugins/group/auto/NetCatPod 11.28
291 TestNetworkPlugins/group/calico/Start 86.93
292 TestNetworkPlugins/group/auto/DNS 0.17
293 TestNetworkPlugins/group/auto/Localhost 0.14
294 TestNetworkPlugins/group/auto/HairPin 0.16
295 TestNetworkPlugins/group/custom-flannel/Start 75.53
296 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
297 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
298 TestNetworkPlugins/group/kindnet/NetCatPod 12.3
299 TestNetworkPlugins/group/kindnet/DNS 0.15
300 TestNetworkPlugins/group/kindnet/Localhost 0.14
301 TestNetworkPlugins/group/kindnet/HairPin 0.16
302 TestNetworkPlugins/group/calico/ControllerPod 6.01
303 TestNetworkPlugins/group/bridge/Start 92.13
304 TestNetworkPlugins/group/flannel/Start 92.78
305 TestNetworkPlugins/group/calico/KubeletFlags 0.18
306 TestNetworkPlugins/group/calico/NetCatPod 12.25
307 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.19
308 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.29
309 TestNetworkPlugins/group/calico/DNS 0.29
310 TestNetworkPlugins/group/calico/Localhost 0.14
311 TestNetworkPlugins/group/calico/HairPin 0.12
312 TestNetworkPlugins/group/custom-flannel/DNS 0.16
313 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
314 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
315 TestNetworkPlugins/group/enable-default-cni/Start 86.72
317 TestStartStop/group/old-k8s-version/serial/FirstStart 120.06
318 TestNetworkPlugins/group/bridge/KubeletFlags 0.18
319 TestNetworkPlugins/group/bridge/NetCatPod 11.34
320 TestNetworkPlugins/group/flannel/ControllerPod 6.01
321 TestNetworkPlugins/group/flannel/KubeletFlags 0.18
322 TestNetworkPlugins/group/flannel/NetCatPod 11.24
323 TestNetworkPlugins/group/bridge/DNS 0.17
324 TestNetworkPlugins/group/bridge/Localhost 0.15
325 TestNetworkPlugins/group/bridge/HairPin 0.17
326 TestNetworkPlugins/group/flannel/DNS 0.25
327 TestNetworkPlugins/group/flannel/Localhost 0.14
328 TestNetworkPlugins/group/flannel/HairPin 0.15
330 TestStartStop/group/no-preload/serial/FirstStart 76.07
331 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
332 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.26
334 TestStartStop/group/embed-certs/serial/FirstStart 102.55
335 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
339 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 96.22
340 TestStartStop/group/old-k8s-version/serial/DeployApp 13.12
341 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.79
342 TestStartStop/group/old-k8s-version/serial/Stop 86.32
343 TestStartStop/group/no-preload/serial/DeployApp 10.33
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
345 TestStartStop/group/no-preload/serial/Stop 85.81
346 TestStartStop/group/embed-certs/serial/DeployApp 11.28
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
348 TestStartStop/group/embed-certs/serial/Stop 83.89
349 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.29
350 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
351 TestStartStop/group/old-k8s-version/serial/SecondStart 44.32
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 83.38
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.14
355 TestStartStop/group/no-preload/serial/SecondStart 50.53
356 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
357 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
358 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
359 TestStartStop/group/old-k8s-version/serial/Pause 2.69
361 TestStartStop/group/newest-cni/serial/FirstStart 49.08
362 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
363 TestStartStop/group/embed-certs/serial/SecondStart 62.41
364 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
366 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 67.06
367 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.13
368 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
369 TestStartStop/group/no-preload/serial/Pause 3.4
370 TestStartStop/group/newest-cni/serial/DeployApp 0
371 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.34
372 TestStartStop/group/newest-cni/serial/Stop 10.36
373 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
374 TestStartStop/group/newest-cni/serial/SecondStart 37.49
375 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.01
376 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
377 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.38
378 TestStartStop/group/embed-certs/serial/Pause 3.72
379 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11
380 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
381 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
384 TestStartStop/group/newest-cni/serial/Pause 2.72
385 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
386 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.5
TestDownloadOnly/v1.28.0/json-events (6.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-362587 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-362587 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.948816128s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.95s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1025 08:29:35.020552    9881 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1025 08:29:35.020632    9881 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-362587
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-362587: exit status 85 (78.47394ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-362587 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-362587 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:29:28
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:29:28.136335    9893 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:29:28.136637    9893 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:28.136649    9893 out.go:374] Setting ErrFile to fd 2...
	I1025 08:29:28.136655    9893 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:28.136889    9893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
	W1025 08:29:28.137032    9893 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21796-5973/.minikube/config/config.json: open /home/jenkins/minikube-integration/21796-5973/.minikube/config/config.json: no such file or directory
	I1025 08:29:28.137573    9893 out.go:368] Setting JSON to true
	I1025 08:29:28.138570    9893 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":718,"bootTime":1761380250,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:29:28.138667    9893 start.go:141] virtualization: kvm guest
	I1025 08:29:28.141402    9893 out.go:99] [download-only-362587] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1025 08:29:28.141589    9893 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 08:29:28.141599    9893 notify.go:220] Checking for updates...
	I1025 08:29:28.143518    9893 out.go:171] MINIKUBE_LOCATION=21796
	I1025 08:29:28.145307    9893 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:29:28.147385    9893 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	I1025 08:29:28.149027    9893 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 08:29:28.150810    9893 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 08:29:28.154304    9893 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 08:29:28.154642    9893 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:29:28.717287    9893 out.go:99] Using the kvm2 driver based on user configuration
	I1025 08:29:28.717331    9893 start.go:305] selected driver: kvm2
	I1025 08:29:28.717341    9893 start.go:925] validating driver "kvm2" against <nil>
	I1025 08:29:28.717673    9893 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:29:28.718198    9893 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1025 08:29:28.718389    9893 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 08:29:28.718416    9893 cni.go:84] Creating CNI manager for ""
	I1025 08:29:28.718470    9893 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 08:29:28.718481    9893 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 08:29:28.718532    9893 start.go:349] cluster config:
	{Name:download-only-362587 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-362587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:29:28.718745    9893 iso.go:125] acquiring lock: {Name:mk56ae07ef3e2fe29ebca77d84768cf173c5b3d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 08:29:28.720668    9893 out.go:99] Downloading VM boot image ...
	I1025 08:29:28.720712    9893 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21796-5973/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1025 08:29:31.654003    9893 out.go:99] Starting "download-only-362587" primary control-plane node in "download-only-362587" cluster
	I1025 08:29:31.654028    9893 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 08:29:31.671232    9893 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1025 08:29:31.671276    9893 cache.go:58] Caching tarball of preloaded images
	I1025 08:29:31.671434    9893 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 08:29:31.673438    9893 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1025 08:29:31.673465    9893 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1025 08:29:31.696347    9893 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1025 08:29:31.696492    9893 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-362587 host does not exist
	  To start a cluster, run: "minikube start -p download-only-362587"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
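
The download URLs in the log above carry ?checksum=file:... and ?checksum=md5:... query strings, which is hashicorp/go-getter's checksum syntax (minikube's downloader builds on go-getter). A minimal standalone fetch of the same preload tarball with the same verification (assuming go-getter v1; the destination filename is arbitrary):

    package main

    import (
        "log"

        getter "github.com/hashicorp/go-getter"
    )

    func main() {
        // Same URL and checksum query the log shows; go-getter verifies the
        // md5 after download and discards the file on mismatch.
        src := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/" +
            "preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4" +
            "?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b"
        if err := getter.GetFile("preloaded-images.tar.lz4", src); err != nil {
            log.Fatal(err)
        }
    }
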

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-362587
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-411797 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-411797 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.833157628s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.83s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1025 08:29:39.248956    9881 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1025 08:29:39.248998    9881 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-5973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-411797
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-411797: exit status 85 (80.252879ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-362587 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-362587 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-362587                                                                                                                                                 │ download-only-362587 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-411797 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-411797 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:29:35
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:29:35.472531   10084 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:29:35.472794   10084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:35.472805   10084 out.go:374] Setting ErrFile to fd 2...
	I1025 08:29:35.472810   10084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:35.473059   10084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
	I1025 08:29:35.473601   10084 out.go:368] Setting JSON to true
	I1025 08:29:35.474452   10084 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":726,"bootTime":1761380250,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:29:35.474549   10084 start.go:141] virtualization: kvm guest
	I1025 08:29:35.476694   10084 out.go:99] [download-only-411797] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 08:29:35.476885   10084 notify.go:220] Checking for updates...
	I1025 08:29:35.478556   10084 out.go:171] MINIKUBE_LOCATION=21796
	I1025 08:29:35.480433   10084 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:29:35.481986   10084 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	I1025 08:29:35.483592   10084 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 08:29:35.484986   10084 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-411797 host does not exist
	  To start a cluster, run: "minikube start -p download-only-411797"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-411797
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I1025 08:29:39.921864    9881 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-631240 --alsologtostderr --binary-mirror http://127.0.0.1:38089 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-631240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-631240
--- PASS: TestBinaryMirror (0.65s)

TestOffline (82.18s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-004545 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-004545 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m21.20531422s)
helpers_test.go:175: Cleaning up "offline-crio-004545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-004545
--- PASS: TestOffline (82.18s)
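The offline test above is a complete start/delete cycle run under the suite's offline scenario. A minimal replay of the commands recorded in this block (the profile name is the one from the log; any name works):

	# Start a crio cluster with the flags exercised above, then clean up.
	out/minikube-linux-amd64 start -p offline-crio-004545 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p offline-crio-004545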

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-631036
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-631036: exit status 85 (66.401339ms)

-- stdout --
	* Profile "addons-631036" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-631036"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-631036
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-631036: exit status 85 (69.729232ms)

-- stdout --
	* Profile "addons-631036" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-631036"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
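Both PreSetup checks pin down the same contract: toggling an addon against a profile that does not exist yet must fail fast with exit status 85 rather than hang. A sketch of that check in shell, reusing the two commands from the blocks above (the echo lines are my addition for inspecting the exit codes):

	# Each command should exit 85 while the profile is absent.
	out/minikube-linux-amd64 addons enable dashboard -p addons-631036; echo "enable exited $?"
	out/minikube-linux-amd64 addons disable dashboard -p addons-631036; echo "disable exited $?"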

TestAddons/Setup (151.51s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-631036 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-631036 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m31.508824566s)
--- PASS: TestAddons/Setup (151.51s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-631036 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-631036 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/serial/GCPAuth/FakeCredentials (11.53s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-631036 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-631036 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8f2f6f1d-47e1-4920-87aa-ea653b62155e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8f2f6f1d-47e1-4920-87aa-ea653b62155e] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004481605s
addons_test.go:694: (dbg) Run:  kubectl --context addons-631036 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-631036 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-631036 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.53s)
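The FakeCredentials flow verifies that the gcp-auth webhook mutates a freshly created pod so the Google credential variables resolve inside it. A condensed replay of the kubectl steps recorded above (testdata/busybox.yaml is the suite's own manifest; any pod in the default namespace should be mutated the same way):

	kubectl --context addons-631036 create -f testdata/busybox.yaml
	kubectl --context addons-631036 create sa gcp-auth-test
	# Once the pod is Running, both variables should print non-empty values.
	kubectl --context addons-631036 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
	kubectl --context addons-631036 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"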

TestAddons/parallel/Registry (18.34s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.987576ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-h2dlk" [deafd51c-1def-42f4-bf1d-433def2f97c8] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004407812s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-lfzv8" [44090c69-a71c-43ba-9342-a65d7cdcbea7] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.018211104s
addons_test.go:392: (dbg) Run:  kubectl --context addons-631036 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-631036 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-631036 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.172301649s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 ip
2025/10/25 08:32:49 [DEBUG] GET http://192.168.39.24:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.34s)
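The registry check reduces to two probes: the in-cluster service DNS name, and the node-local registry-proxy on port 5000 (the DEBUG GET line above). A sketch of both by hand; the curl probe is my stand-in for the test's internal HTTP GET, and the node IP is the one this run reported:

	# In-cluster probe via a throwaway busybox pod.
	kubectl --context addons-631036 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# Node-side probe of registry-proxy (hand-rolled equivalent of the GET above).
	curl -sI http://192.168.39.24:5000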

TestAddons/parallel/RegistryCreds (0.67s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.872091ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-631036
addons_test.go:332: (dbg) Run:  kubectl --context addons-631036 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.67s)

TestAddons/parallel/InspektorGadget (6.32s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-gg64c" [274413d3-cf62-4e8a-a462-c34623a92df7] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004024639s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.32s)

TestAddons/parallel/MetricsServer (5.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 10.491267ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4b2tp" [060cbc46-1bf9-48ba-b6eb-9f0fe9e1a912] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00526636s
addons_test.go:463: (dbg) Run:  kubectl --context addons-631036 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.85s)

TestAddons/parallel/CSI (63.91s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1025 08:32:50.924874    9881 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1025 08:32:50.928485    9881 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1025 08:32:50.928511    9881 kapi.go:107] duration metric: took 3.647472ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.656512ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-631036 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-631036 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [2b233318-5607-4c9b-9fb3-cecd11dd135e] Pending
helpers_test.go:352: "task-pv-pod" [2b233318-5607-4c9b-9fb3-cecd11dd135e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [2b233318-5607-4c9b-9fb3-cecd11dd135e] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004044199s
addons_test.go:572: (dbg) Run:  kubectl --context addons-631036 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-631036 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-631036 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-631036 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-631036 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-631036 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-631036 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [d1237455-8c42-4227-b885-752d2e581257] Pending
helpers_test.go:352: "task-pv-pod-restore" [d1237455-8c42-4227-b885-752d2e581257] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [d1237455-8c42-4227-b885-752d2e581257] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004208101s
addons_test.go:614: (dbg) Run:  kubectl --context addons-631036 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-631036 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-631036 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-631036 addons disable volumesnapshots --alsologtostderr -v=1: (1.144170583s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-631036 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.981358431s)
--- PASS: TestAddons/parallel/CSI (63.91s)
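The CSI block walks one full provision/snapshot/restore loop. Stripped of the polling, the kubectl sequence recorded above is (manifests are the suite's testdata files):

	kubectl --context addons-631036 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-631036 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-631036 create -f testdata/csi-hostpath-driver/snapshot.yaml
	# Poll until the snapshot reports readyToUse=true.
	kubectl --context addons-631036 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
	# Restore the snapshot into a new claim and mount it in a fresh pod.
	kubectl --context addons-631036 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-631036 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml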

TestAddons/parallel/Headlamp (22.99s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-631036 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-2869d" [65f7fa6f-11a1-4e77-9e39-2666933a5579] Pending
helpers_test.go:352: "headlamp-6945c6f4d-2869d" [65f7fa6f-11a1-4e77-9e39-2666933a5579] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-2869d" [65f7fa6f-11a1-4e77-9e39-2666933a5579] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.004821647s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-631036 addons disable headlamp --alsologtostderr -v=1: (6.039798838s)
--- PASS: TestAddons/parallel/Headlamp (22.99s)

TestAddons/parallel/CloudSpanner (6.7s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-zg648" [ab4bbce6-4c1d-4549-ac17-371d84762085] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003729176s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.70s)

TestAddons/parallel/LocalPath (57.71s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-631036 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-631036 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-631036 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [a2202688-fe7b-45f9-951f-2fe282ca93d1] Pending
helpers_test.go:352: "test-local-path" [a2202688-fe7b-45f9-951f-2fe282ca93d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [a2202688-fe7b-45f9-951f-2fe282ca93d1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [a2202688-fe7b-45f9-951f-2fe282ca93d1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.006542167s
addons_test.go:967: (dbg) Run:  kubectl --context addons-631036 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 ssh "cat /opt/local-path-provisioner/pvc-28e1dc7b-1f5a-4207-a5b2-acbed43ab42a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-631036 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-631036 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-631036 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.917565661s)
--- PASS: TestAddons/parallel/LocalPath (57.71s)
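The local-path check ends by reading the written file straight off the node's provisioner directory. The verification step from the log, runnable as-is against this profile (the pvc-... directory name is specific to this run's claim):

	out/minikube-linux-amd64 -p addons-631036 ssh "cat /opt/local-path-provisioner/pvc-28e1dc7b-1f5a-4207-a5b2-acbed43ab42a_default_test-pvc/file1"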

TestAddons/parallel/NvidiaDevicePlugin (6.75s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-65m2r" [d049181d-68c1-439c-bfbb-61eff9e986fa] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.05297926s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.75s)

TestAddons/parallel/Yakd (10.89s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-mhxfz" [44a86b7c-9745-4300-81d4-ccfc6f1bc807] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005742234s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-631036 addons disable yakd --alsologtostderr -v=1: (5.878703887s)
--- PASS: TestAddons/parallel/Yakd (10.89s)

TestAddons/StoppedEnableDisable (78.81s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-631036
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-631036: (1m18.600814832s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-631036
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-631036
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-631036
--- PASS: TestAddons/StoppedEnableDisable (78.81s)
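StoppedEnableDisable pins down that addon toggles keep working while the cluster is powered off. Replayed from the commands recorded above:

	out/minikube-linux-amd64 stop -p addons-631036
	# Both toggles should still succeed against the stopped cluster.
	out/minikube-linux-amd64 addons enable dashboard -p addons-631036
	out/minikube-linux-amd64 addons disable dashboard -p addons-631036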

TestCertOptions (83.36s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-585228 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-585228 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m20.782471565s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-585228 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-585228 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-585228 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-585228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-585228
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-585228: (2.14940986s)
--- PASS: TestCertOptions (83.36s)
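The cert-options verification inspects the generated API server certificate for the extra SANs passed at start. The check from the log, plus a grep filter of my own (not part of the test) to surface the relevant fields:

	out/minikube-linux-amd64 -p cert-options-585228 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
	# Filter added here for illustration only.
	out/minikube-linux-amd64 -p cert-options-585228 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -E "192\.168\.15\.15|www\.google\.com"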

TestCertExpiration (302.59s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-097778 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E1025 09:27:12.833456    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-097778 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m9.383187655s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-097778 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-097778 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (52.278624452s)
helpers_test.go:175: Cleaning up "cert-expiration-097778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-097778
--- PASS: TestCertExpiration (302.59s)
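CertExpiration covers rotation: issue certificates with a 3-minute lifetime, let them lapse, then restart with a one-year lifetime and confirm the cluster recovers. The two starts recorded above:

	out/minikube-linux-amd64 start -p cert-expiration-097778 --memory=3072 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	# ...after the 3m window has passed, restart with a longer expiry:
	out/minikube-linux-amd64 start -p cert-expiration-097778 --memory=3072 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio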

TestForceSystemdFlag (49.52s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-811701 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-811701 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.449423909s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-811701 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-811701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-811701
--- PASS: TestForceSystemdFlag (49.52s)
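The systemd-flag check boots with --force-systemd and then reads CRI-O's drop-in config; the expectation (my reading of the check, not quoted from the log) is that the cgroup manager there is set to systemd:

	out/minikube-linux-amd64 start -p force-systemd-flag-811701 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 --container-runtime=crio
	# Inspect the drop-in the test reads; look for the systemd cgroup manager setting.
	out/minikube-linux-amd64 -p force-systemd-flag-811701 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"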

TestForceSystemdEnv (55.89s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-185354 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-185354 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (55.007507554s)
helpers_test.go:175: Cleaning up "force-systemd-env-185354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-185354
--- PASS: TestForceSystemdEnv (55.89s)

TestErrorSpam/setup (37.67s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-340824 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-340824 --driver=kvm2  --container-runtime=crio
E1025 08:37:12.839916    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:12.846523    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:12.858081    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:12.879595    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:12.921110    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:13.002634    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:13.164293    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:13.486506    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:14.128555    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:15.410155    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:17.973136    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:23.095307    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-340824 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-340824 --driver=kvm2  --container-runtime=crio: (37.673059188s)
--- PASS: TestErrorSpam/setup (37.67s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.65s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 status
--- PASS: TestErrorSpam/status (0.65s)

TestErrorSpam/pause (1.59s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 pause
E1025 08:37:33.336852    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 pause
--- PASS: TestErrorSpam/pause (1.59s)

TestErrorSpam/unpause (1.84s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

TestErrorSpam/stop (4.31s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 stop: (2.073500665s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 stop: (1.093415899s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-340824 --log_dir /tmp/nospam-340824 stop: (1.144088617s)
--- PASS: TestErrorSpam/stop (4.31s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21796-5973/.minikube/files/etc/test/nested/copy/9881/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.26s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897515 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1025 08:37:53.818706    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:38:34.781465    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-897515 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m16.25891324s)
--- PASS: TestFunctional/serial/StartWithProxy (76.26s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.72s)

=== RUN   TestFunctional/serial/SoftStart
I1025 08:38:57.477206    9881 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897515 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-897515 --alsologtostderr -v=8: (32.721576198s)
functional_test.go:678: soft start took 32.722427812s for "functional-897515" cluster.
I1025 08:39:30.199176    9881 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (32.72s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-897515 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-897515 cache add registry.k8s.io/pause:3.1: (1.084254084s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-897515 cache add registry.k8s.io/pause:3.3: (1.066407448s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-897515 cache add registry.k8s.io/pause:latest: (1.074382792s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.23s)
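cache add pulls an image into the host-side cache and loads it into the node's runtime. The three adds from this block, with the in-node listing that verify_cache_inside_node uses further down:

	out/minikube-linux-amd64 -p functional-897515 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-amd64 -p functional-897515 cache add registry.k8s.io/pause:3.3
	out/minikube-linux-amd64 -p functional-897515 cache add registry.k8s.io/pause:latest
	# Confirm the tags are visible inside the node.
	out/minikube-linux-amd64 -p functional-897515 ssh sudo crictl images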

TestFunctional/serial/CacheCmd/cache/add_local (2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-897515 /tmp/TestFunctionalserialCacheCmdcacheadd_local2456747389/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 cache add minikube-local-cache-test:functional-897515
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-897515 cache add minikube-local-cache-test:functional-897515: (1.605346759s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 cache delete minikube-local-cache-test:functional-897515
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-897515
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.00s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897515 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (185.706269ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)
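cache reload is the recovery path: remove an image inside the node, watch crictl lose it, then repopulate from the host cache. The sequence from this block:

	out/minikube-linux-amd64 -p functional-897515 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# Expected to fail here with "no such image":
	out/minikube-linux-amd64 -p functional-897515 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-897515 cache reload
	# And to succeed again after the reload:
	out/minikube-linux-amd64 -p functional-897515 ssh sudo crictl inspecti registry.k8s.io/pause:latest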

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 kubectl -- --context functional-897515 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-897515 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (33.6s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897515 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 08:39:56.706437    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-897515 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.596058422s)
functional_test.go:776: restart took 33.596198927s for "functional-897515" cluster.
I1025 08:40:11.418796    9881 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (33.60s)
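--extra-config threads per-component flags through to the deployed control plane; here it enables an extra admission plugin on an existing cluster by restarting it in place. The restart from this block:

	out/minikube-linux-amd64 start -p functional-897515 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all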

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-897515 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
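
ComponentHealth shells out to kubectl for the tier=control-plane pods as JSON and then reads each pod's phase and Ready condition; that is what the "phase: Running" / "status: Ready" pairs above reflect. A self-contained sketch of the same check, assuming kubectl and the functional-897515 context are available:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// podList models only the fields this check needs from kubectl's JSON.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-897515",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}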

                                                
                                    
TestFunctional/serial/LogsCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-897515 logs: (1.54880854s)
--- PASS: TestFunctional/serial/LogsCmd (1.55s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 logs --file /tmp/TestFunctionalserialLogsFileCmd1653939240/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-897515 logs --file /tmp/TestFunctionalserialLogsFileCmd1653939240/001/logs.txt: (1.51089913s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (3.81s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-897515 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-897515
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-897515: exit status 115 (240.287238ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.108:31843 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-897515 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.81s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897515 config get cpus: exit status 14 (72.524981ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897515 config get cpus: exit status 14 (59.178217ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
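
The config subtest round-trips a key: unset, get (expect failure), set, get, unset, get (expect failure again). Per the stderr above, exit status 14 is what minikube returns when the requested config key is absent. A sketch that scripts the same round-trip and reads the exit code back out:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// run executes the minikube binary from the log and returns the exit code.
	func run(args ...string) int {
		err := exec.Command("out/minikube-linux-amd64", args...).Run()
		if err == nil {
			return 0
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return -1 // the binary could not be started at all
	}

	func main() {
		p := "functional-897515"
		run("-p", p, "config", "unset", "cpus")
		fmt.Println(run("-p", p, "config", "get", "cpus")) // expect 14: key not in config
		run("-p", p, "config", "set", "cpus", "2")
		fmt.Println(run("-p", p, "config", "get", "cpus")) // expect 0
		run("-p", p, "config", "unset", "cpus")
		fmt.Println(run("-p", p, "config", "get", "cpus")) // expect 14 again
	}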

                                                
                                    
TestFunctional/parallel/DashboardCmd (31.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-897515 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-897515 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 15866: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (31.79s)

                                                
                                    
TestFunctional/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897515 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-897515 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (127.348568ms)

                                                
                                                
-- stdout --
	* [functional-897515] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 08:40:21.354815   15441 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:40:21.355085   15441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:40:21.355097   15441 out.go:374] Setting ErrFile to fd 2...
	I1025 08:40:21.355101   15441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:40:21.355342   15441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
	I1025 08:40:21.355820   15441 out.go:368] Setting JSON to false
	I1025 08:40:21.356739   15441 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1371,"bootTime":1761380250,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:40:21.356844   15441 start.go:141] virtualization: kvm guest
	I1025 08:40:21.359294   15441 out.go:179] * [functional-897515] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 08:40:21.361382   15441 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 08:40:21.361411   15441 notify.go:220] Checking for updates...
	I1025 08:40:21.364185   15441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:40:21.365814   15441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	I1025 08:40:21.367387   15441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 08:40:21.368926   15441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 08:40:21.370421   15441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:40:21.372643   15441 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:40:21.373227   15441 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:40:21.407566   15441 out.go:179] * Using the kvm2 driver based on existing profile
	I1025 08:40:21.408997   15441 start.go:305] selected driver: kvm2
	I1025 08:40:21.409020   15441 start.go:925] validating driver "kvm2" against &{Name:functional-897515 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-897515 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:40:21.409155   15441 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:40:21.412141   15441 out.go:203] 
	W1025 08:40:21.413661   15441 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 08:40:21.415042   15441 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897515 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)
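
DryRun validates the requested configuration without creating or touching the VM; here 250MB is below minikube's usable minimum of 1800MB, so the run exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY, per the stderr above). That makes --dry-run a cheap scriptable preflight; a minimal sketch using the same arguments as the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// --dry-run performs validation only; no VM is created or modified.
		err := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-897515",
			"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// 23 is the RSRC_INSUFFICIENT_REQ_MEMORY exit code seen in the log above.
			fmt.Println("dry-run exit code:", ee.ExitCode())
		}
	}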

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-897515 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-897515 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (127.406973ms)

                                                
                                                
-- stdout --
	* [functional-897515] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 08:40:21.608796   15483 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:40:21.609081   15483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:40:21.609091   15483 out.go:374] Setting ErrFile to fd 2...
	I1025 08:40:21.609098   15483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:40:21.609424   15483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
	I1025 08:40:21.609880   15483 out.go:368] Setting JSON to false
	I1025 08:40:21.610830   15483 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1372,"bootTime":1761380250,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 08:40:21.610930   15483 start.go:141] virtualization: kvm guest
	I1025 08:40:21.612752   15483 out.go:179] * [functional-897515] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1025 08:40:21.614789   15483 notify.go:220] Checking for updates...
	I1025 08:40:21.614854   15483 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 08:40:21.616570   15483 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:40:21.618157   15483 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	I1025 08:40:21.619461   15483 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 08:40:21.620803   15483 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 08:40:21.622232   15483 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:40:21.624413   15483 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:40:21.624887   15483 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:40:21.660980   15483 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1025 08:40:21.662377   15483 start.go:305] selected driver: kvm2
	I1025 08:40:21.662394   15483 start.go:925] validating driver "kvm2" against &{Name:functional-897515 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-897515 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:40:21.662494   15483 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:40:21.665276   15483 out.go:203] 
	W1025 08:40:21.666866   15483 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 08:40:21.668305   15483 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
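
The French output above is the same dry-run memory failure rendered under a non-English locale; minikube picks its message translations from the standard locale environment variables. A sketch that forces the locale for a single child process (LC_ALL=fr_FR.UTF-8 is an assumption about which locales the host has installed):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-897515",
			"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
		// Inherit the current environment but override the locale for this child only.
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
		out, _ := cmd.CombinedOutput()
		fmt.Printf("%s", out)
	}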

                                                
                                    
TestFunctional/parallel/StatusCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.73s)
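
The -f flag to minikube status takes a Go text/template rendered against the status object; the fields referenced above are Host, Kubelet, APIServer, and Kubeconfig (the "kublet:" label is literal text in the test's format string, not a field name). A sketch of how such a template renders, with example values that are typical minikube status strings rather than output from this run:

	package main

	import (
		"os"
		"text/template"
	)

	// status mirrors the fields the log's template references.
	type status struct{ Host, Kubelet, APIServer, Kubeconfig string }

	func main() {
		t := template.Must(template.New("s").Parse(
			"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
		t.Execute(os.Stdout, status{"Running", "Running", "Running", "Configured"})
	}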

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (32.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-897515 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-897515 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-pxckf" [3735773a-2640-4928-ba2f-211ce212d52e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-pxckf" [3735773a-2640-4928-ba2f-211ce212d52e] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 32.003460567s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.108:31750
functional_test.go:1680: http://192.168.39.108:31750: success! body:
Request served by hello-node-connect-7d85dfc575-pxckf

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.108:31750
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (32.53s)
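
Once "service hello-node-connect --url" resolves the NodePort endpoint, the test simply fetches it over HTTP; the echoed body above is the request as the server saw it. A sketch of that probe, with the URL hardcoded to the one this run resolved:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// NodePort URL as resolved by "minikube service hello-node-connect --url" above.
		resp, err := http.Get("http://192.168.39.108:31750")
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d\n%s", resp.StatusCode, body)
	}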

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.38s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [e1c27c97-3f4b-401f-88de-21a8735def35] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006222804s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-897515 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-897515 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-897515 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-897515 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-897515 apply -f testdata/storage-provisioner/pod.yaml
I1025 08:40:28.935335    9881 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d0a20106-d4a1-4592-9ce4-fe33a446e9a3] Pending
helpers_test.go:352: "sp-pod" [d0a20106-d4a1-4592-9ce4-fe33a446e9a3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d0a20106-d4a1-4592-9ce4-fe33a446e9a3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 35.004733132s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-897515 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-897515 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-897515 delete -f testdata/storage-provisioner/pod.yaml: (1.313387754s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-897515 apply -f testdata/storage-provisioner/pod.yaml
I1025 08:41:05.501334    9881 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [050fae7a-1f2a-47c0-ac67-0b2e93a7cb16] Pending
helpers_test.go:352: "sp-pod" [050fae7a-1f2a-47c0-ac67-0b2e93a7cb16] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [050fae7a-1f2a-47c0-ac67-0b2e93a7cb16] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003981029s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-897515 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.96s)
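
The PVC subtest proves persistence by writing a file from the first pod, deleting that pod, re-creating it against the same claim, and reading the file back. The same round-trip as a scripted sketch (pod name and mount path taken from the log; the testdata manifests are assumed to be present):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kc runs kubectl against the functional-897515 context and prints the result.
	func kc(args ...string) {
		full := append([]string{"--context", "functional-897515"}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		fmt.Printf("kubectl %v -> err=%v\n%s", args, err, out)
	}

	func main() {
		kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")       // write through the PVC
		kc("delete", "-f", "testdata/storage-provisioner/pod.yaml") // drop the pod, keep the claim
		kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")  // new pod, same claim
		// (wait for the new pod to reach Running before the next step)
		kc("exec", "sp-pod", "--", "ls", "/tmp/mount")              // expect "foo" to survive
	}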

                                                
                                    
TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh -n functional-897515 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 cp functional-897515:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd850175238/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh -n functional-897515 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh -n functional-897515 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.14s)

                                                
                                    
TestFunctional/parallel/MySQL (26.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-897515 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-tr95b" [59966352-d9cf-43ca-a76f-912069fbf33d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-tr95b" [59966352-d9cf-43ca-a76f-912069fbf33d] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.018058951s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-897515 exec mysql-5bb876957f-tr95b -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-897515 exec mysql-5bb876957f-tr95b -- mysql -ppassword -e "show databases;": exit status 1 (321.848536ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1025 08:40:47.386398    9881 retry.go:31] will retry after 1.36904714s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-897515 exec mysql-5bb876957f-tr95b -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-897515 exec mysql-5bb876957f-tr95b -- mysql -ppassword -e "show databases;": exit status 1 (148.455934ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1025 08:40:48.904698    9881 retry.go:31] will retry after 1.799116647s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-897515 exec mysql-5bb876957f-tr95b -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-897515 exec mysql-5bb876957f-tr95b -- mysql -ppassword -e "show databases;": exit status 1 (168.032805ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1025 08:40:50.873069    9881 retry.go:31] will retry after 2.919649021s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-897515 exec mysql-5bb876957f-tr95b -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.52s)
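
The mysql client fails twice while the server container finishes initializing (first an auth error, then the missing socket), and the harness retries with a growing delay (1.37s, 1.8s, 2.9s above). A minimal retry-with-backoff sketch in the same spirit as the test's retry.go helper; the doubling factor, cap, and attempt count here are assumptions, not minikube's exact policy:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := time.Second
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := exec.Command("kubectl", "--context", "functional-897515",
				"exec", "mysql-5bb876957f-tr95b", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out)
				return
			}
			fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // simple doubling backoff; the test uses jittered delays
		}
		fmt.Println("gave up after 5 attempts")
	}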

                                                
                                    
TestFunctional/parallel/FileSync (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9881/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "sudo cat /etc/test/nested/copy/9881/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.17s)

                                                
                                    
TestFunctional/parallel/CertSync (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9881.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "sudo cat /etc/ssl/certs/9881.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9881.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "sudo cat /usr/share/ca-certificates/9881.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/98812.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "sudo cat /etc/ssl/certs/98812.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/98812.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "sudo cat /usr/share/ca-certificates/98812.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.02s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-897515 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897515 ssh "sudo systemctl is-active docker": exit status 1 (194.901726ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897515 ssh "sudo systemctl is-active containerd": exit status 1 (189.663665ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
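
With CRI-O as the active runtime, docker and containerd must be inactive. "systemctl is-active" prints the unit state and exits 0 only when the unit is active, so the status-3 exits above are the expected "inactive" results. A sketch of the same assertion over minikube ssh, with crio added to show the contrasting active case:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		for _, unit := range []string{"docker", "containerd", "crio"} {
			out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-897515",
				"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
			// err is nil only when is-active exits 0, i.e. the unit is active.
			fmt.Printf("%s: %s(err=%v)\n", unit, out, err)
		}
	}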

                                                
                                    
TestFunctional/parallel/License (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-897515 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-897515 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-6pmmq" [66f5060a-f086-424f-9ec7-57da653487f6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-6pmmq" [66f5060a-f086-424f-9ec7-57da653487f6] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.007518861s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.20s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-897515 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-897515
localhost/kicbase/echo-server:functional-897515
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897515 image ls --format short --alsologtostderr:
I1025 08:40:55.844410   16298 out.go:360] Setting OutFile to fd 1 ...
I1025 08:40:55.844626   16298 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:40:55.844634   16298 out.go:374] Setting ErrFile to fd 2...
I1025 08:40:55.844638   16298 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:40:55.844835   16298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
I1025 08:40:55.846085   16298 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:40:55.846312   16298 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:40:55.848868   16298 ssh_runner.go:195] Run: systemctl --version
I1025 08:40:55.851714   16298 main.go:141] libmachine: domain functional-897515 has defined MAC address 52:54:00:e1:49:40 in network mk-functional-897515
I1025 08:40:55.852307   16298 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e1:49:40", ip: ""} in network mk-functional-897515: {Iface:virbr1 ExpiryTime:2025-10-25 09:37:56 +0000 UTC Type:0 Mac:52:54:00:e1:49:40 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:functional-897515 Clientid:01:52:54:00:e1:49:40}
I1025 08:40:55.852337   16298 main.go:141] libmachine: domain functional-897515 has defined IP address 192.168.39.108 and MAC address 52:54:00:e1:49:40 in network mk-functional-897515
I1025 08:40:55.852515   16298 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/functional-897515/id_rsa Username:docker}
I1025 08:40:55.954559   16298 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
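
Under the hood, "image ls" SSHes into the node and runs "sudo crictl images --output json" (visible in the stderr trace above), then flattens the repo tags. A sketch that parses that JSON shape directly; the struct below models only the fields used here, under the assumption that crictl's JSON matches the lowerCamelCase keys seen in the ImageListJson output further down:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList matches the subset of crictl's JSON output needed here.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-897515",
			"ssh", "sudo crictl images --output json").Output()
		if err != nil {
			panic(err)
		}
		var imgs imageList
		if err := json.Unmarshal(out, &imgs); err != nil {
			panic(err)
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag)
			}
		}
	}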

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-897515 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/library/nginx                 │ latest             │ 657fdcd1c3659 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-897515  │ 9056ab77afb8e │ 4.95MB │
│ localhost/minikube-local-cache-test     │ functional-897515  │ 66a7577f5c5a4 │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897515 image ls --format table --alsologtostderr:
I1025 08:40:59.315368   16369 out.go:360] Setting OutFile to fd 1 ...
I1025 08:40:59.315652   16369 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:40:59.315663   16369 out.go:374] Setting ErrFile to fd 2...
I1025 08:40:59.315670   16369 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:40:59.315890   16369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
I1025 08:40:59.316526   16369 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:40:59.316664   16369 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:40:59.319010   16369 ssh_runner.go:195] Run: systemctl --version
I1025 08:40:59.321615   16369 main.go:141] libmachine: domain functional-897515 has defined MAC address 52:54:00:e1:49:40 in network mk-functional-897515
I1025 08:40:59.322140   16369 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e1:49:40", ip: ""} in network mk-functional-897515: {Iface:virbr1 ExpiryTime:2025-10-25 09:37:56 +0000 UTC Type:0 Mac:52:54:00:e1:49:40 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:functional-897515 Clientid:01:52:54:00:e1:49:40}
I1025 08:40:59.322165   16369 main.go:141] libmachine: domain functional-897515 has defined IP address 192.168.39.108 and MAC address 52:54:00:e1:49:40 in network mk-functional-897515
I1025 08:40:59.322393   16369 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/functional-897515/id_rsa Username:docker}
I1025 08:40:59.416295   16369 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-897515 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"66a7577f5c5a41fa60c166228cdb3350b009c476d7419adc334558edb55fd3b2","repoDigests":["localhost/minikube-local-cache-test@sha256:319cec488abf0c41133af5be3315ac96d0ac4eaa8e6be60275f765625d38c389"],"repoTags":["localhost/minikube-local-cache-test:functional-897515"],"size":"3330"},
{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},
{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
{"id":"657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8"],"repoTags":["docker.io/library/nginx:latest"],"size":"155467611"},
{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-897515"],"size":"4945146"},
{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},
{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},
{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},
{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897515 image ls --format json --alsologtostderr:
I1025 08:40:59.062799   16358 out.go:360] Setting OutFile to fd 1 ...
I1025 08:40:59.063066   16358 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:40:59.063075   16358 out.go:374] Setting ErrFile to fd 2...
I1025 08:40:59.063080   16358 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:40:59.063375   16358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
I1025 08:40:59.063976   16358 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:40:59.064091   16358 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:40:59.066518   16358 ssh_runner.go:195] Run: systemctl --version
I1025 08:40:59.069137   16358 main.go:141] libmachine: domain functional-897515 has defined MAC address 52:54:00:e1:49:40 in network mk-functional-897515
I1025 08:40:59.069640   16358 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e1:49:40", ip: ""} in network mk-functional-897515: {Iface:virbr1 ExpiryTime:2025-10-25 09:37:56 +0000 UTC Type:0 Mac:52:54:00:e1:49:40 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:functional-897515 Clientid:01:52:54:00:e1:49:40}
I1025 08:40:59.069667   16358 main.go:141] libmachine: domain functional-897515 has defined IP address 192.168.39.108 and MAC address 52:54:00:e1:49:40 in network mk-functional-897515
I1025 08:40:59.069861   16358 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/functional-897515/id_rsa Username:docker}
I1025 08:40:59.158065   16358 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
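
Note: "image ls --format json" prints the listing above as a single JSON document. To pull out just the tags from it, one option (assuming jq is available on the host) is:

	out/minikube-linux-amd64 -p functional-897515 image ls --format json | jq -r '.[].repoTags[]'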

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-897515 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 66a7577f5c5a41fa60c166228cdb3350b009c476d7419adc334558edb55fd3b2
repoDigests:
- localhost/minikube-local-cache-test@sha256:319cec488abf0c41133af5be3315ac96d0ac4eaa8e6be60275f765625d38c389
repoTags:
- localhost/minikube-local-cache-test:functional-897515
size: "3330"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-897515
size: "4945146"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897515 image ls --format yaml --alsologtostderr:
I1025 08:40:56.092172   16309 out.go:360] Setting OutFile to fd 1 ...
I1025 08:40:56.092537   16309 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:40:56.092552   16309 out.go:374] Setting ErrFile to fd 2...
I1025 08:40:56.092558   16309 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:40:56.093275   16309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
I1025 08:40:56.094607   16309 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:40:56.094771   16309 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:40:56.097674   16309 ssh_runner.go:195] Run: systemctl --version
I1025 08:40:56.100517   16309 main.go:141] libmachine: domain functional-897515 has defined MAC address 52:54:00:e1:49:40 in network mk-functional-897515
I1025 08:40:56.101095   16309 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e1:49:40", ip: ""} in network mk-functional-897515: {Iface:virbr1 ExpiryTime:2025-10-25 09:37:56 +0000 UTC Type:0 Mac:52:54:00:e1:49:40 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:functional-897515 Clientid:01:52:54:00:e1:49:40}
I1025 08:40:56.101136   16309 main.go:141] libmachine: domain functional-897515 has defined IP address 192.168.39.108 and MAC address 52:54:00:e1:49:40 in network mk-functional-897515
I1025 08:40:56.101363   16309 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/functional-897515/id_rsa Username:docker}
I1025 08:40:56.197820   16309 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897515 ssh pgrep buildkitd: exit status 1 (172.251871ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image build -t localhost/my-image:functional-897515 testdata/build --alsologtostderr
2025/10/25 08:40:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-897515 image build -t localhost/my-image:functional-897515 testdata/build --alsologtostderr: (3.802197178s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-897515 image build -t localhost/my-image:functional-897515 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f4aaee082af
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-897515
--> 820b142c573
Successfully tagged localhost/my-image:functional-897515
820b142c57358cc4f34ad1351b06a79366ce1e8e61e2eda5dd33867e499eaa92
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-897515 image build -t localhost/my-image:functional-897515 testdata/build --alsologtostderr:
I1025 08:40:56.503176   16330 out.go:360] Setting OutFile to fd 1 ...
I1025 08:40:56.503501   16330 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:40:56.503513   16330 out.go:374] Setting ErrFile to fd 2...
I1025 08:40:56.503518   16330 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:40:56.503712   16330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
I1025 08:40:56.504299   16330 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:40:56.505034   16330 config.go:182] Loaded profile config "functional-897515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:40:56.507493   16330 ssh_runner.go:195] Run: systemctl --version
I1025 08:40:56.510120   16330 main.go:141] libmachine: domain functional-897515 has defined MAC address 52:54:00:e1:49:40 in network mk-functional-897515
I1025 08:40:56.510607   16330 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e1:49:40", ip: ""} in network mk-functional-897515: {Iface:virbr1 ExpiryTime:2025-10-25 09:37:56 +0000 UTC Type:0 Mac:52:54:00:e1:49:40 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:functional-897515 Clientid:01:52:54:00:e1:49:40}
I1025 08:40:56.510645   16330 main.go:141] libmachine: domain functional-897515 has defined IP address 192.168.39.108 and MAC address 52:54:00:e1:49:40 in network mk-functional-897515
I1025 08:40:56.510857   16330 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/functional-897515/id_rsa Username:docker}
I1025 08:40:56.594979   16330 build_images.go:161] Building image from path: /tmp/build.1879831827.tar
I1025 08:40:56.595055   16330 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 08:40:56.611335   16330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1879831827.tar
I1025 08:40:56.619232   16330 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1879831827.tar: stat -c "%s %y" /var/lib/minikube/build/build.1879831827.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1879831827.tar': No such file or directory
I1025 08:40:56.619285   16330 ssh_runner.go:362] scp /tmp/build.1879831827.tar --> /var/lib/minikube/build/build.1879831827.tar (3072 bytes)
I1025 08:40:56.677147   16330 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1879831827
I1025 08:40:56.696323   16330 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1879831827 -xf /var/lib/minikube/build/build.1879831827.tar
I1025 08:40:56.713292   16330 crio.go:315] Building image: /var/lib/minikube/build/build.1879831827
I1025 08:40:56.713358   16330 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-897515 /var/lib/minikube/build/build.1879831827 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1025 08:41:00.211954   16330 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-897515 /var/lib/minikube/build/build.1879831827 --cgroup-manager=cgroupfs: (3.498562569s)
I1025 08:41:00.212037   16330 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1879831827
I1025 08:41:00.228502   16330 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1879831827.tar
I1025 08:41:00.241250   16330 build_images.go:217] Built localhost/my-image:functional-897515 from /tmp/build.1879831827.tar
I1025 08:41:00.241288   16330 build_images.go:133] succeeded building to: functional-897515
I1025 08:41:00.241293   16330 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.17s)
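
Note: the three STEP lines in the build output correspond to a Dockerfile along these lines (a sketch reconstructed from the log; the actual contents of testdata/build are not shown in this report):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /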

TestFunctional/parallel/ImageCommands/Setup (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.485856975s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-897515
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.51s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/MountCmd/any-port (10.21s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897515 /tmp/TestFunctionalparallelMountCmdany-port317624626/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761381620243042211" to /tmp/TestFunctionalparallelMountCmdany-port317624626/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761381620243042211" to /tmp/TestFunctionalparallelMountCmdany-port317624626/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761381620243042211" to /tmp/TestFunctionalparallelMountCmdany-port317624626/001/test-1761381620243042211
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897515 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (191.052047ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1025 08:40:20.434491    9881 retry.go:31] will retry after 606.906111ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 08:40 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 08:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 08:40 test-1761381620243042211
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh cat /mount-9p/test-1761381620243042211
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-897515 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [1f69082d-f003-4dcf-bc2e-ef183f21f920] Pending
helpers_test.go:352: "busybox-mount" [1f69082d-f003-4dcf-bc2e-ef183f21f920] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [1f69082d-f003-4dcf-bc2e-ef183f21f920] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [1f69082d-f003-4dcf-bc2e-ef183f21f920] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.005021991s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-897515 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897515 /tmp/TestFunctionalparallelMountCmdany-port317624626/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.21s)
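
Note: to reproduce this mount check by hand, run the mount helper in the background and verify the 9p mount from inside the guest (the /tmp/data host path here is only an example):

	out/minikube-linux-amd64 mount -p functional-897515 /tmp/data:/mount-9p &
	out/minikube-linux-amd64 -p functional-897515 ssh "findmnt -T /mount-9p"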

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image load --daemon kicbase/echo-server:functional-897515 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-897515 image load --daemon kicbase/echo-server:functional-897515 --alsologtostderr: (1.379150797s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.59s)

TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "259.646963ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "68.876289ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "360.679951ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "69.6283ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image load --daemon kicbase/echo-server:functional-897515 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-897515
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image load --daemon kicbase/echo-server:functional-897515 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.50s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image save kicbase/echo-server:functional-897515 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image rm kicbase/echo-server:functional-897515 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)
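
Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a save/remove/load round trip through a tarball. The equivalent manual sequence (with an example tar path) would be:

	out/minikube-linux-amd64 -p functional-897515 image save kicbase/echo-server:functional-897515 /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-897515 image rm kicbase/echo-server:functional-897515
	out/minikube-linux-amd64 -p functional-897515 image load /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-897515 image ls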

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-897515
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 image save --daemon kicbase/echo-server:functional-897515 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-897515
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.77s)
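
Note: "image save --daemon" puts the image back into the host Docker daemon under the localhost/ prefix, which is why the test inspects localhost/kicbase/echo-server:functional-897515. A quick manual check would be:

	docker image inspect localhost/kicbase/echo-server:functional-897515 --format '{{.Id}}'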

TestFunctional/parallel/ServiceCmd/List (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.25s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 service list -o json
functional_test.go:1504: Took "248.391589ms" to run "out/minikube-linux-amd64 -p functional-897515 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.25s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.108:31257
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

TestFunctional/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

TestFunctional/parallel/ServiceCmd/URL (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.108:31257
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)
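
Note: the HTTPS, Format, and URL subtests all resolve the same hello-node service endpoint (192.168.39.108:31257 in this run); the two URL forms exercised were:

	out/minikube-linux-amd64 -p functional-897515 service hello-node --url
	out/minikube-linux-amd64 -p functional-897515 service --namespace=default --https --url hello-node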

TestFunctional/parallel/MountCmd/specific-port (1.4s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897515 /tmp/TestFunctionalparallelMountCmdspecific-port402610073/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897515 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (156.376327ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1025 08:40:30.614385    9881 retry.go:31] will retry after 452.730823ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897515 /tmp/TestFunctionalparallelMountCmdspecific-port402610073/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897515 ssh "sudo umount -f /mount-9p": exit status 1 (200.180738ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-897515 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897515 /tmp/TestFunctionalparallelMountCmdspecific-port402610073/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.40s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897515 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1003694128/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897515 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1003694128/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-897515 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1003694128/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-897515 ssh "findmnt -T" /mount1: exit status 1 (220.209858ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1025 08:40:32.082882    9881 retry.go:31] will retry after 624.302692ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-897515 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-897515 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897515 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1003694128/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897515 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1003694128/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-897515 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1003694128/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)
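
Note: the cleanup path relies on the --kill flag, which tears down any mount helper processes left over for the profile (the log above confirms all three helpers were gone afterwards):

	out/minikube-linux-amd64 mount -p functional-897515 --kill=true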

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-897515
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-897515
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-897515
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (251.64s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1025 08:42:12.834435    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:42:40.550200    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:45:18.557808    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:45:18.564333    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:45:18.576727    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:45:18.598209    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:45:18.639460    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:45:18.721004    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:45:18.883299    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:45:19.205031    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:45:19.846993    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:45:21.128628    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:45:23.690358    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-890523 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (4m11.057282258s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (251.64s)

TestMultiControlPlane/serial/DeployApp (7.82s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- rollout status deployment/busybox
E1025 08:45:28.812089    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-890523 kubectl -- rollout status deployment/busybox: (5.367417655s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-ckbqp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-khhqf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-lzzhl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-ckbqp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-khhqf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-lzzhl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-ckbqp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-khhqf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-lzzhl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.82s)

TestMultiControlPlane/serial/PingHostFromPods (1.32s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-ckbqp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-ckbqp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-khhqf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-khhqf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-lzzhl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 kubectl -- exec busybox-7b57f96db7-lzzhl -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.32s)
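
Note: the pipeline above resolves host.minikube.internal from inside each pod (awk 'NR==5' takes the fifth line of busybox nslookup output, cut -d' ' -f3 extracts the third space-delimited field, i.e. the resolved address), and the follow-up ping targets 192.168.39.1, the host side of the KVM network in this run. A manual spot check from one pod would be:

	kubectl --context ha-890523 exec busybox-7b57f96db7-ckbqp -- nslookup host.minikube.internal
	kubectl --context ha-890523 exec busybox-7b57f96db7-ckbqp -- ping -c 1 192.168.39.1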

TestMultiControlPlane/serial/AddWorkerNode (47.61s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 node add --alsologtostderr -v 5
E1025 08:45:39.053432    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:45:59.535175    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-890523 node add --alsologtostderr -v 5: (46.902241354s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.61s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-890523 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)
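
Note: the jsonpath query prints every node's full label map on one line; a more readable equivalent when checking by hand is:

	kubectl --context ha-890523 get nodes --show-labels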

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.7s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.70s)

TestMultiControlPlane/serial/CopyFile (10.95s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp testdata/cp-test.txt ha-890523:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3452916555/001/cp-test_ha-890523.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523:/home/docker/cp-test.txt ha-890523-m02:/home/docker/cp-test_ha-890523_ha-890523-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m02 "sudo cat /home/docker/cp-test_ha-890523_ha-890523-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523:/home/docker/cp-test.txt ha-890523-m03:/home/docker/cp-test_ha-890523_ha-890523-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m03 "sudo cat /home/docker/cp-test_ha-890523_ha-890523-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523:/home/docker/cp-test.txt ha-890523-m04:/home/docker/cp-test_ha-890523_ha-890523-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m04 "sudo cat /home/docker/cp-test_ha-890523_ha-890523-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp testdata/cp-test.txt ha-890523-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3452916555/001/cp-test_ha-890523-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523-m02:/home/docker/cp-test.txt ha-890523:/home/docker/cp-test_ha-890523-m02_ha-890523.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523 "sudo cat /home/docker/cp-test_ha-890523-m02_ha-890523.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523-m02:/home/docker/cp-test.txt ha-890523-m03:/home/docker/cp-test_ha-890523-m02_ha-890523-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m03 "sudo cat /home/docker/cp-test_ha-890523-m02_ha-890523-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523-m02:/home/docker/cp-test.txt ha-890523-m04:/home/docker/cp-test_ha-890523-m02_ha-890523-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m04 "sudo cat /home/docker/cp-test_ha-890523-m02_ha-890523-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp testdata/cp-test.txt ha-890523-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3452916555/001/cp-test_ha-890523-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523-m03:/home/docker/cp-test.txt ha-890523:/home/docker/cp-test_ha-890523-m03_ha-890523.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523 "sudo cat /home/docker/cp-test_ha-890523-m03_ha-890523.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523-m03:/home/docker/cp-test.txt ha-890523-m02:/home/docker/cp-test_ha-890523-m03_ha-890523-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m02 "sudo cat /home/docker/cp-test_ha-890523-m03_ha-890523-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523-m03:/home/docker/cp-test.txt ha-890523-m04:/home/docker/cp-test_ha-890523-m03_ha-890523-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m04 "sudo cat /home/docker/cp-test_ha-890523-m03_ha-890523-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp testdata/cp-test.txt ha-890523-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3452916555/001/cp-test_ha-890523-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523-m04:/home/docker/cp-test.txt ha-890523:/home/docker/cp-test_ha-890523-m04_ha-890523.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523 "sudo cat /home/docker/cp-test_ha-890523-m04_ha-890523.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523-m04:/home/docker/cp-test.txt ha-890523-m02:/home/docker/cp-test_ha-890523-m04_ha-890523-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m02 "sudo cat /home/docker/cp-test_ha-890523-m04_ha-890523-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 cp ha-890523-m04:/home/docker/cp-test.txt ha-890523-m03:/home/docker/cp-test_ha-890523-m04_ha-890523-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 ssh -n ha-890523-m03 "sudo cat /home/docker/cp-test_ha-890523-m04_ha-890523-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.95s)
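
The copy matrix above reduces to one round trip per node; a condensed sketch using the profile and paths from this run (the loop itself is illustrative, not how the test is written):

  for node in ha-890523 ha-890523-m02 ha-890523-m03 ha-890523-m04; do
    # Push the fixture to the node, then read it back over SSH
    out/minikube-linux-amd64 -p ha-890523 cp testdata/cp-test.txt "$node":/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-890523 ssh -n "$node" "sudo cat /home/docker/cp-test.txt"
  done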

TestMultiControlPlane/serial/StopSecondaryNode (85.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 node stop m02 --alsologtostderr -v 5
E1025 08:46:40.496816    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:47:12.834414    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-890523 node stop m02 --alsologtostderr -v 5: (1m24.566523174s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-890523 status --alsologtostderr -v 5: exit status 7 (547.431862ms)

-- stdout --
	ha-890523
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-890523-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-890523-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-890523-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1025 08:47:58.413695   19649 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:47:58.413794   19649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:47:58.413799   19649 out.go:374] Setting ErrFile to fd 2...
	I1025 08:47:58.413802   19649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:47:58.414044   19649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
	I1025 08:47:58.414280   19649 out.go:368] Setting JSON to false
	I1025 08:47:58.414322   19649 mustload.go:65] Loading cluster: ha-890523
	I1025 08:47:58.414473   19649 notify.go:220] Checking for updates...
	I1025 08:47:58.414774   19649 config.go:182] Loaded profile config "ha-890523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:47:58.414789   19649 status.go:174] checking status of ha-890523 ...
	I1025 08:47:58.417034   19649 status.go:371] ha-890523 host status = "Running" (err=<nil>)
	I1025 08:47:58.417053   19649 host.go:66] Checking if "ha-890523" exists ...
	I1025 08:47:58.419857   19649 main.go:141] libmachine: domain ha-890523 has defined MAC address 52:54:00:64:44:4f in network mk-ha-890523
	I1025 08:47:58.420384   19649 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:64:44:4f", ip: ""} in network mk-ha-890523: {Iface:virbr1 ExpiryTime:2025-10-25 09:41:29 +0000 UTC Type:0 Mac:52:54:00:64:44:4f Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-890523 Clientid:01:52:54:00:64:44:4f}
	I1025 08:47:58.420427   19649 main.go:141] libmachine: domain ha-890523 has defined IP address 192.168.39.43 and MAC address 52:54:00:64:44:4f in network mk-ha-890523
	I1025 08:47:58.420599   19649 host.go:66] Checking if "ha-890523" exists ...
	I1025 08:47:58.420885   19649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:47:58.423746   19649 main.go:141] libmachine: domain ha-890523 has defined MAC address 52:54:00:64:44:4f in network mk-ha-890523
	I1025 08:47:58.424269   19649 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:64:44:4f", ip: ""} in network mk-ha-890523: {Iface:virbr1 ExpiryTime:2025-10-25 09:41:29 +0000 UTC Type:0 Mac:52:54:00:64:44:4f Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-890523 Clientid:01:52:54:00:64:44:4f}
	I1025 08:47:58.424303   19649 main.go:141] libmachine: domain ha-890523 has defined IP address 192.168.39.43 and MAC address 52:54:00:64:44:4f in network mk-ha-890523
	I1025 08:47:58.424516   19649 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/ha-890523/id_rsa Username:docker}
	I1025 08:47:58.521929   19649 ssh_runner.go:195] Run: systemctl --version
	I1025 08:47:58.530476   19649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:47:58.556183   19649 kubeconfig.go:125] found "ha-890523" server: "https://192.168.39.254:8443"
	I1025 08:47:58.556229   19649 api_server.go:166] Checking apiserver status ...
	I1025 08:47:58.556286   19649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:47:58.583346   19649 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1365/cgroup
	W1025 08:47:58.596501   19649 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1365/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 08:47:58.596570   19649 ssh_runner.go:195] Run: ls
	I1025 08:47:58.601838   19649 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1025 08:47:58.607166   19649 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1025 08:47:58.607193   19649 status.go:463] ha-890523 apiserver status = Running (err=<nil>)
	I1025 08:47:58.607203   19649 status.go:176] ha-890523 status: &{Name:ha-890523 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:47:58.607222   19649 status.go:174] checking status of ha-890523-m02 ...
	I1025 08:47:58.609033   19649 status.go:371] ha-890523-m02 host status = "Stopped" (err=<nil>)
	I1025 08:47:58.609058   19649 status.go:384] host is not running, skipping remaining checks
	I1025 08:47:58.609065   19649 status.go:176] ha-890523-m02 status: &{Name:ha-890523-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:47:58.609081   19649 status.go:174] checking status of ha-890523-m03 ...
	I1025 08:47:58.610695   19649 status.go:371] ha-890523-m03 host status = "Running" (err=<nil>)
	I1025 08:47:58.610717   19649 host.go:66] Checking if "ha-890523-m03" exists ...
	I1025 08:47:58.613550   19649 main.go:141] libmachine: domain ha-890523-m03 has defined MAC address 52:54:00:93:fc:07 in network mk-ha-890523
	I1025 08:47:58.614099   19649 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:fc:07", ip: ""} in network mk-ha-890523: {Iface:virbr1 ExpiryTime:2025-10-25 09:44:07 +0000 UTC Type:0 Mac:52:54:00:93:fc:07 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-890523-m03 Clientid:01:52:54:00:93:fc:07}
	I1025 08:47:58.614131   19649 main.go:141] libmachine: domain ha-890523-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:93:fc:07 in network mk-ha-890523
	I1025 08:47:58.614348   19649 host.go:66] Checking if "ha-890523-m03" exists ...
	I1025 08:47:58.614616   19649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:47:58.617436   19649 main.go:141] libmachine: domain ha-890523-m03 has defined MAC address 52:54:00:93:fc:07 in network mk-ha-890523
	I1025 08:47:58.617904   19649 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:fc:07", ip: ""} in network mk-ha-890523: {Iface:virbr1 ExpiryTime:2025-10-25 09:44:07 +0000 UTC Type:0 Mac:52:54:00:93:fc:07 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-890523-m03 Clientid:01:52:54:00:93:fc:07}
	I1025 08:47:58.617940   19649 main.go:141] libmachine: domain ha-890523-m03 has defined IP address 192.168.39.206 and MAC address 52:54:00:93:fc:07 in network mk-ha-890523
	I1025 08:47:58.618143   19649 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/ha-890523-m03/id_rsa Username:docker}
	I1025 08:47:58.704114   19649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:47:58.726350   19649 kubeconfig.go:125] found "ha-890523" server: "https://192.168.39.254:8443"
	I1025 08:47:58.726375   19649 api_server.go:166] Checking apiserver status ...
	I1025 08:47:58.726416   19649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:47:58.754803   19649 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1817/cgroup
	W1025 08:47:58.769383   19649 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1817/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 08:47:58.769445   19649 ssh_runner.go:195] Run: ls
	I1025 08:47:58.775673   19649 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1025 08:47:58.782195   19649 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1025 08:47:58.782219   19649 status.go:463] ha-890523-m03 apiserver status = Running (err=<nil>)
	I1025 08:47:58.782227   19649 status.go:176] ha-890523-m03 status: &{Name:ha-890523-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:47:58.782257   19649 status.go:174] checking status of ha-890523-m04 ...
	I1025 08:47:58.783808   19649 status.go:371] ha-890523-m04 host status = "Running" (err=<nil>)
	I1025 08:47:58.783828   19649 host.go:66] Checking if "ha-890523-m04" exists ...
	I1025 08:47:58.786590   19649 main.go:141] libmachine: domain ha-890523-m04 has defined MAC address 52:54:00:ca:27:46 in network mk-ha-890523
	I1025 08:47:58.787008   19649 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ca:27:46", ip: ""} in network mk-ha-890523: {Iface:virbr1 ExpiryTime:2025-10-25 09:45:50 +0000 UTC Type:0 Mac:52:54:00:ca:27:46 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-890523-m04 Clientid:01:52:54:00:ca:27:46}
	I1025 08:47:58.787035   19649 main.go:141] libmachine: domain ha-890523-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:ca:27:46 in network mk-ha-890523
	I1025 08:47:58.787252   19649 host.go:66] Checking if "ha-890523-m04" exists ...
	I1025 08:47:58.787456   19649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:47:58.789890   19649 main.go:141] libmachine: domain ha-890523-m04 has defined MAC address 52:54:00:ca:27:46 in network mk-ha-890523
	I1025 08:47:58.790325   19649 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ca:27:46", ip: ""} in network mk-ha-890523: {Iface:virbr1 ExpiryTime:2025-10-25 09:45:50 +0000 UTC Type:0 Mac:52:54:00:ca:27:46 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-890523-m04 Clientid:01:52:54:00:ca:27:46}
	I1025 08:47:58.790354   19649 main.go:141] libmachine: domain ha-890523-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:ca:27:46 in network mk-ha-890523
	I1025 08:47:58.790492   19649 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/ha-890523-m04/id_rsa Username:docker}
	I1025 08:47:58.875626   19649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:47:58.897936   19649 status.go:176] ha-890523-m04 status: &{Name:ha-890523-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (85.11s)
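
The status probe in the stderr above hits the shared HA endpoint directly; the same check can be reproduced by hand. The VIP and port come from the log; -k is an assumption (the test cluster's CA is not in the host trust store):

  # Expect the literal body "ok" with HTTP 200 while a majority of control planes is up
  curl -k https://192.168.39.254:8443/healthz
  # status exits 7 (as above) while m02 is stopped
  out/minikube-linux-amd64 -p ha-890523 status --alsologtostderr -v 5; echo "exit=$?"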

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

TestMultiControlPlane/serial/RestartSecondaryNode (44.71s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 node start m02 --alsologtostderr -v 5
E1025 08:48:02.419623    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-890523 node start m02 --alsologtostderr -v 5: (43.926297708s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.71s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (363.54s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 stop --alsologtostderr -v 5
E1025 08:50:18.558168    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:50:46.263754    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:52:12.836896    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-890523 stop --alsologtostderr -v 5: (3m59.872973996s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 start --wait true --alsologtostderr -v 5
E1025 08:53:35.913485    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-890523 start --wait true --alsologtostderr -v 5: (2m3.521633395s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (363.54s)
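
RestartClusterKeepsNodes asserts that the node set survives a full stop/start cycle; a hand-run sketch of that comparison (the diff bookkeeping is illustrative, the commands are the ones run above):

  out/minikube-linux-amd64 -p ha-890523 node list > /tmp/nodes.before
  out/minikube-linux-amd64 -p ha-890523 stop --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-890523 start --wait true --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-890523 node list > /tmp/nodes.after
  diff /tmp/nodes.before /tmp/nodes.after && echo "node set preserved"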

TestMultiControlPlane/serial/DeleteSecondaryNode (18.51s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-890523 node delete m03 --alsologtostderr -v 5: (17.882070338s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.51s)
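
The go-template above emits one Ready condition status per node; with m03 deleted, the expected output is one True per remaining node, e.g. (a sketch of expected output, not captured from this run):

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
  #  True
  #  True
  #  True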

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

TestMultiControlPlane/serial/StopCluster (244.13s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 stop --alsologtostderr -v 5
E1025 08:55:18.558360    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:57:12.835344    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-890523 stop --alsologtostderr -v 5: (4m4.06568768s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-890523 status --alsologtostderr -v 5: exit status 7 (63.576423ms)

-- stdout --
	ha-890523
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-890523-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-890523-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 08:59:11.583896   22834 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:59:11.584146   22834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:59:11.584156   22834 out.go:374] Setting ErrFile to fd 2...
	I1025 08:59:11.584160   22834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:59:11.584373   22834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
	I1025 08:59:11.584550   22834 out.go:368] Setting JSON to false
	I1025 08:59:11.584577   22834 mustload.go:65] Loading cluster: ha-890523
	I1025 08:59:11.584681   22834 notify.go:220] Checking for updates...
	I1025 08:59:11.584930   22834 config.go:182] Loaded profile config "ha-890523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:59:11.584945   22834 status.go:174] checking status of ha-890523 ...
	I1025 08:59:11.587157   22834 status.go:371] ha-890523 host status = "Stopped" (err=<nil>)
	I1025 08:59:11.587173   22834 status.go:384] host is not running, skipping remaining checks
	I1025 08:59:11.587177   22834 status.go:176] ha-890523 status: &{Name:ha-890523 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:59:11.587193   22834 status.go:174] checking status of ha-890523-m02 ...
	I1025 08:59:11.588548   22834 status.go:371] ha-890523-m02 host status = "Stopped" (err=<nil>)
	I1025 08:59:11.588565   22834 status.go:384] host is not running, skipping remaining checks
	I1025 08:59:11.588569   22834 status.go:176] ha-890523-m02 status: &{Name:ha-890523-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:59:11.588583   22834 status.go:174] checking status of ha-890523-m04 ...
	I1025 08:59:11.589726   22834 status.go:371] ha-890523-m04 host status = "Stopped" (err=<nil>)
	I1025 08:59:11.589740   22834 status.go:384] host is not running, skipping remaining checks
	I1025 08:59:11.589744   22834 status.go:176] ha-890523-m04 status: &{Name:ha-890523-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (244.13s)
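
minikube status uses distinct exit codes to encode cluster state; exit status 7, seen above and in StopSecondaryNode, is the code reported while hosts are stopped. A scripting sketch (the case labels are inferred from this report, not an exhaustive list):

  out/minikube-linux-amd64 -p ha-890523 status
  case $? in
    0) echo "all nodes running" ;;
    7) echo "at least one host stopped (as in this run)" ;;
    *) echo "other failure" ;;
  esac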

TestMultiControlPlane/serial/RestartCluster (107.12s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1025 09:00:18.560850    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-890523 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m46.476111903s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (107.12s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

TestMultiControlPlane/serial/AddSecondaryNode (75.49s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 node add --control-plane --alsologtostderr -v 5
E1025 09:01:41.628163    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:02:12.833873    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-890523 node add --control-plane --alsologtostderr -v 5: (1m14.813305796s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-890523 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.49s)
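
Worker and control-plane joins differ only in the --control-plane flag (both forms appear in this report, in AddWorkerNode and here); a sketch:

  out/minikube-linux-amd64 -p ha-890523 node add                   # joins as a worker
  out/minikube-linux-amd64 -p ha-890523 node add --control-plane   # joins as a control plane
  out/minikube-linux-amd64 -p ha-890523 status --alsologtostderr -v 5   # confirm the new node is listed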

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.7s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.70s)

TestJSONOutput/start/Command (73.96s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-853873 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-853873 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m13.955740662s)
--- PASS: TestJSONOutput/start/Command (73.96s)
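
start --output=json emits one CloudEvents-style JSON object per line; the Audit and CurrentSteps subtests below validate that stream. A sketch for watching step progress (the jq filter is an assumption; the io.k8s.sigs.minikube.step event shape matches the TestErrorJSONOutput stdout later in this report):

  out/minikube-linux-amd64 start -p json-output-853873 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + ": " + .data.name'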

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-853873 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-853873 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.93s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-853873 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-853873 --output=json --user=testUser: (6.931828527s)
--- PASS: TestJSONOutput/stop/Command (6.93s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-789198 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-789198 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (79.259012ms)

-- stdout --
	{"specversion":"1.0","id":"2cd4eaa5-bdb0-4b90-93df-eed195df8f7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-789198] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9fd1901e-fa46-4fa3-bd35-e2e75955fcfb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21796"}}
	{"specversion":"1.0","id":"1aba4de8-f391-4364-a495-75992b8c6df0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7d931f88-2740-4c6b-8bea-ad269fb333c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig"}}
	{"specversion":"1.0","id":"6c2dc08c-069f-40c9-809f-beb6c403e8cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube"}}
	{"specversion":"1.0","id":"1fdf6dbf-5edd-4c8b-b590-dfa791ebfe62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2c2c2b8d-51a8-49e6-86d6-84db27bca752","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bd0e3fc6-585b-4912-a7d3-a9aa7bbd8e70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-789198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-789198
--- PASS: TestErrorJSONOutput (0.24s)
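
Each stdout line above is a CloudEvents-style JSON object, and the failure itself is typed io.k8s.sigs.minikube.error. A sketch for pulling errors out of the stream (jq usage is illustrative; the field names come from the stdout above):

  out/minikube-linux-amd64 start -p json-output-error-789198 --memory=3072 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
  # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64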

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (82.57s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-118592 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-118592 --driver=kvm2  --container-runtime=crio: (40.538265546s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-120784 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-120784 --driver=kvm2  --container-runtime=crio: (39.352217924s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-118592
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-120784
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-120784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-120784
helpers_test.go:175: Cleaning up "first-118592" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-118592
--- PASS: TestMinikubeProfile (82.57s)
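
The profile switch is verified through profile list -ojson; a sketch for reading which profile is active (the .valid[] array matches minikube's profile JSON shape; the Active field name is an assumption about the current schema):

  out/minikube-linux-amd64 profile list -ojson \
    | jq -r '.valid[] | .Name + " active=" + (.Active | tostring)'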

TestMountStart/serial/StartWithMountFirst (23.55s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-145371 --memory=3072 --mount-string /tmp/TestMountStartserial140031835/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1025 09:05:18.561860    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-145371 --memory=3072 --mount-string /tmp/TestMountStartserial140031835/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.548063962s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.55s)

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-145371 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-145371 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
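
The verification pairs a directory listing with findmnt --json; the JSON can also be asserted on directly. minikube's host mount is 9p (consistent with the --mount-msize/--mount-port 9p options above); the jq filter is illustrative:

  out/minikube-linux-amd64 -p mount-start-1-145371 ssh -- findmnt --json /minikube-host \
    | jq -e '.filesystems[0].fstype == "9p"'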

TestMountStart/serial/StartWithMountSecond (22.86s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-158997 --memory=3072 --mount-string /tmp/TestMountStartserial140031835/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-158997 --memory=3072 --mount-string /tmp/TestMountStartserial140031835/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.855258299s)
--- PASS: TestMountStart/serial/StartWithMountSecond (22.86s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-158997 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-158997 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-145371 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-158997 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-158997 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-158997
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-158997: (1.295409582s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (18.02s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-158997
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-158997: (17.020122271s)
--- PASS: TestMountStart/serial/RestartStopped (18.02s)

TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-158997 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-158997 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (128.52s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-557334 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1025 09:07:12.833410    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-557334 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m8.192926048s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (128.52s)

TestMultiNode/serial/DeployApp2Nodes (6.5s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-557334 -- rollout status deployment/busybox: (4.856274968s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- exec busybox-7b57f96db7-97vc8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- exec busybox-7b57f96db7-lnwcr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- exec busybox-7b57f96db7-97vc8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- exec busybox-7b57f96db7-lnwcr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- exec busybox-7b57f96db7-97vc8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- exec busybox-7b57f96db7-lnwcr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.50s)
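
The deploy check resolves three names from each busybox pod; condensed into a loop (pod names taken from the log; the loop itself is illustrative):

  for pod in busybox-7b57f96db7-97vc8 busybox-7b57f96db7-lnwcr; do
    for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
      out/minikube-linux-amd64 kubectl -p multinode-557334 -- exec "$pod" -- nslookup "$name"
    done
  done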

TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- exec busybox-7b57f96db7-97vc8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- exec busybox-7b57f96db7-97vc8 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- exec busybox-7b57f96db7-lnwcr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-557334 -- exec busybox-7b57f96db7-lnwcr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
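
The awk 'NR==5' pipeline above picks the line of busybox nslookup output that carries the answer for host.minikube.internal and cuts out the address, which is then pinged. By hand, from one of the pods (the pod name and the resolved 192.168.39.1 gateway come from this run):

  out/minikube-linux-amd64 kubectl -p multinode-557334 -- exec busybox-7b57f96db7-97vc8 -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  # 192.168.39.1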

TestMultiNode/serial/AddNode (43.53s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-557334 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-557334 -v=5 --alsologtostderr: (43.075050542s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.53s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-557334 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.46s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

TestMultiNode/serial/CopyFile (5.99s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 cp testdata/cp-test.txt multinode-557334:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 cp multinode-557334:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4190208342/001/cp-test_multinode-557334.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 cp multinode-557334:/home/docker/cp-test.txt multinode-557334-m02:/home/docker/cp-test_multinode-557334_multinode-557334-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334-m02 "sudo cat /home/docker/cp-test_multinode-557334_multinode-557334-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 cp multinode-557334:/home/docker/cp-test.txt multinode-557334-m03:/home/docker/cp-test_multinode-557334_multinode-557334-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334-m03 "sudo cat /home/docker/cp-test_multinode-557334_multinode-557334-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 cp testdata/cp-test.txt multinode-557334-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 cp multinode-557334-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4190208342/001/cp-test_multinode-557334-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 cp multinode-557334-m02:/home/docker/cp-test.txt multinode-557334:/home/docker/cp-test_multinode-557334-m02_multinode-557334.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334 "sudo cat /home/docker/cp-test_multinode-557334-m02_multinode-557334.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 cp multinode-557334-m02:/home/docker/cp-test.txt multinode-557334-m03:/home/docker/cp-test_multinode-557334-m02_multinode-557334-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334-m03 "sudo cat /home/docker/cp-test_multinode-557334-m02_multinode-557334-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 cp testdata/cp-test.txt multinode-557334-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 cp multinode-557334-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4190208342/001/cp-test_multinode-557334-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 cp multinode-557334-m03:/home/docker/cp-test.txt multinode-557334:/home/docker/cp-test_multinode-557334-m03_multinode-557334.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334 "sudo cat /home/docker/cp-test_multinode-557334-m03_multinode-557334.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 cp multinode-557334-m03:/home/docker/cp-test.txt multinode-557334-m02:/home/docker/cp-test_multinode-557334-m03_multinode-557334-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 ssh -n multinode-557334-m02 "sudo cat /home/docker/cp-test_multinode-557334-m03_multinode-557334-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.99s)
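
Each cp step above is immediately verified by reading the file back over SSH. A minimal round-trip sketch of that pattern, using the binary path and node names from this log (error handling simplified):

// cp_roundtrip_sketch.go — a sketch of the copy/verify pattern above.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	local := "testdata/cp-test.txt"
	want, err := os.ReadFile(local)
	if err != nil {
		panic(err)
	}
	mk := "out/minikube-linux-amd64"
	// Copy into the m02 node, as in the log above.
	if err := exec.Command(mk, "-p", "multinode-557334", "cp",
		local, "multinode-557334-m02:/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// Read it back from inside the node and compare.
	got, err := exec.Command(mk, "-p", "multinode-557334", "ssh",
		"-n", "multinode-557334-m02", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		panic("copied file does not match the source")
	}
	fmt.Println("cp round-trip verified")
}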

TestMultiNode/serial/StopNode (2.18s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-557334 node stop m03: (1.521207467s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-557334 status: exit status 7 (326.706771ms)

-- stdout --
	multinode-557334
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-557334-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-557334-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-557334 status --alsologtostderr: exit status 7 (330.596137ms)

-- stdout --
	multinode-557334
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-557334-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-557334-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 09:09:20.588873   28533 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:09:20.589119   28533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:09:20.589129   28533 out.go:374] Setting ErrFile to fd 2...
	I1025 09:09:20.589133   28533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:09:20.589363   28533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
	I1025 09:09:20.589527   28533 out.go:368] Setting JSON to false
	I1025 09:09:20.589556   28533 mustload.go:65] Loading cluster: multinode-557334
	I1025 09:09:20.589638   28533 notify.go:220] Checking for updates...
	I1025 09:09:20.589925   28533 config.go:182] Loaded profile config "multinode-557334": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:09:20.589938   28533 status.go:174] checking status of multinode-557334 ...
	I1025 09:09:20.592370   28533 status.go:371] multinode-557334 host status = "Running" (err=<nil>)
	I1025 09:09:20.592392   28533 host.go:66] Checking if "multinode-557334" exists ...
	I1025 09:09:20.595421   28533 main.go:141] libmachine: domain multinode-557334 has defined MAC address 52:54:00:0a:f8:70 in network mk-multinode-557334
	I1025 09:09:20.595934   28533 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0a:f8:70", ip: ""} in network mk-multinode-557334: {Iface:virbr1 ExpiryTime:2025-10-25 10:06:28 +0000 UTC Type:0 Mac:52:54:00:0a:f8:70 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-557334 Clientid:01:52:54:00:0a:f8:70}
	I1025 09:09:20.595970   28533 main.go:141] libmachine: domain multinode-557334 has defined IP address 192.168.39.58 and MAC address 52:54:00:0a:f8:70 in network mk-multinode-557334
	I1025 09:09:20.596136   28533 host.go:66] Checking if "multinode-557334" exists ...
	I1025 09:09:20.596405   28533 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:09:20.598772   28533 main.go:141] libmachine: domain multinode-557334 has defined MAC address 52:54:00:0a:f8:70 in network mk-multinode-557334
	I1025 09:09:20.599147   28533 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0a:f8:70", ip: ""} in network mk-multinode-557334: {Iface:virbr1 ExpiryTime:2025-10-25 10:06:28 +0000 UTC Type:0 Mac:52:54:00:0a:f8:70 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-557334 Clientid:01:52:54:00:0a:f8:70}
	I1025 09:09:20.599171   28533 main.go:141] libmachine: domain multinode-557334 has defined IP address 192.168.39.58 and MAC address 52:54:00:0a:f8:70 in network mk-multinode-557334
	I1025 09:09:20.599345   28533 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/multinode-557334/id_rsa Username:docker}
	I1025 09:09:20.683203   28533 ssh_runner.go:195] Run: systemctl --version
	I1025 09:09:20.690524   28533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:09:20.707986   28533 kubeconfig.go:125] found "multinode-557334" server: "https://192.168.39.58:8443"
	I1025 09:09:20.708031   28533 api_server.go:166] Checking apiserver status ...
	I1025 09:09:20.708082   28533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:09:20.728232   28533 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1332/cgroup
	W1025 09:09:20.740365   28533 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1332/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:09:20.740432   28533 ssh_runner.go:195] Run: ls
	I1025 09:09:20.745372   28533 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1025 09:09:20.750443   28533 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1025 09:09:20.750472   28533 status.go:463] multinode-557334 apiserver status = Running (err=<nil>)
	I1025 09:09:20.750484   28533 status.go:176] multinode-557334 status: &{Name:multinode-557334 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:09:20.750508   28533 status.go:174] checking status of multinode-557334-m02 ...
	I1025 09:09:20.752284   28533 status.go:371] multinode-557334-m02 host status = "Running" (err=<nil>)
	I1025 09:09:20.752308   28533 host.go:66] Checking if "multinode-557334-m02" exists ...
	I1025 09:09:20.754978   28533 main.go:141] libmachine: domain multinode-557334-m02 has defined MAC address 52:54:00:9d:27:ac in network mk-multinode-557334
	I1025 09:09:20.755399   28533 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9d:27:ac", ip: ""} in network mk-multinode-557334: {Iface:virbr1 ExpiryTime:2025-10-25 10:07:51 +0000 UTC Type:0 Mac:52:54:00:9d:27:ac Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-557334-m02 Clientid:01:52:54:00:9d:27:ac}
	I1025 09:09:20.755436   28533 main.go:141] libmachine: domain multinode-557334-m02 has defined IP address 192.168.39.50 and MAC address 52:54:00:9d:27:ac in network mk-multinode-557334
	I1025 09:09:20.755574   28533 host.go:66] Checking if "multinode-557334-m02" exists ...
	I1025 09:09:20.755795   28533 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:09:20.758213   28533 main.go:141] libmachine: domain multinode-557334-m02 has defined MAC address 52:54:00:9d:27:ac in network mk-multinode-557334
	I1025 09:09:20.758631   28533 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9d:27:ac", ip: ""} in network mk-multinode-557334: {Iface:virbr1 ExpiryTime:2025-10-25 10:07:51 +0000 UTC Type:0 Mac:52:54:00:9d:27:ac Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:multinode-557334-m02 Clientid:01:52:54:00:9d:27:ac}
	I1025 09:09:20.758665   28533 main.go:141] libmachine: domain multinode-557334-m02 has defined IP address 192.168.39.50 and MAC address 52:54:00:9d:27:ac in network mk-multinode-557334
	I1025 09:09:20.758813   28533 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21796-5973/.minikube/machines/multinode-557334-m02/id_rsa Username:docker}
	I1025 09:09:20.839098   28533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:09:20.859110   28533 status.go:176] multinode-557334-m02 status: &{Name:multinode-557334-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:09:20.859141   28533 status.go:174] checking status of multinode-557334-m03 ...
	I1025 09:09:20.861149   28533 status.go:371] multinode-557334-m03 host status = "Stopped" (err=<nil>)
	I1025 09:09:20.861176   28533 status.go:384] host is not running, skipping remaining checks
	I1025 09:09:20.861184   28533 status.go:176] multinode-557334-m03 status: &{Name:multinode-557334-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.18s)
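
The exit status 7 above is expected, not a failure: `minikube status` encodes component state in its exit code, so a stopped node surfaces as a non-zero exit while the status text still prints (the log itself notes "may be ok" for this code elsewhere). A minimal sketch of how a caller can distinguish that case, assuming the exit-code meaning observed in this run (the constant name is illustrative):

// status_exitcode_sketch.go — a sketch of tolerating the expected exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

const exitHostStopped = 7 // observed in this log when a node's host is stopped

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-557334", "status")
	out, err := cmd.Output()
	fmt.Print(string(out)) // the status text is printed even on non-zero exit
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == exitHostStopped {
		fmt.Println("a node is stopped, which this test expects after `node stop m03`")
	} else if err != nil {
		panic(err) // any other failure is a real error
	}
}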

TestMultiNode/serial/StartAfterStop (41.72s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-557334 node start m03 -v=5 --alsologtostderr: (41.219775151s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.72s)

TestMultiNode/serial/RestartKeepsNodes (286.32s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-557334
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-557334
E1025 09:10:15.917632    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:10:18.558689    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:12:12.837225    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-557334: (2m44.284564033s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-557334 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-557334 --wait=true -v=5 --alsologtostderr: (2m1.909180449s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-557334
--- PASS: TestMultiNode/serial/RestartKeepsNodes (286.32s)
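
The node list captured before the stop/start cycle is compared with the list afterwards; a full restart must not drop m02 or m03. A minimal sketch of that assertion, mirroring the commands logged above:

// restart_nodes_sketch.go — a sketch of the before/after node-list comparison.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	before := run("node", "list", "-p", "multinode-557334")
	run("stop", "-p", "multinode-557334")
	run("start", "-p", "multinode-557334", "--wait=true", "-v=5", "--alsologtostderr")
	after := run("node", "list", "-p", "multinode-557334")
	if before != after {
		panic("restart changed the node list:\nbefore:\n" + before + "after:\n" + after)
	}
	fmt.Println("restart kept all nodes")
}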

TestMultiNode/serial/DeleteNode (2.58s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-557334 node delete m03: (2.119259767s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.58s)
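
The closing go-template prints one Ready condition status per node, so after `node delete m03` the expected output is exactly one `True` per remaining node. A minimal sketch of the same assertion (template copied from the command above, outer quoting simplified):

// ready_template_sketch.go — a sketch of the Ready-condition check above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	// One whitespace-separated token per node; each must be "True".
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			panic("node not Ready: " + status)
		}
	}
	fmt.Println("all remaining nodes report Ready=True")
}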

TestMultiNode/serial/StopMultiNode (175.53s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 stop
E1025 09:15:18.557472    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:17:12.836942    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-557334 stop: (2m55.40359657s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-557334 status: exit status 7 (63.606457ms)

-- stdout --
	multinode-557334
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-557334-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-557334 status --alsologtostderr: exit status 7 (62.802195ms)

-- stdout --
	multinode-557334
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-557334-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 09:17:47.005311   31350 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:17:47.005548   31350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:17:47.005557   31350 out.go:374] Setting ErrFile to fd 2...
	I1025 09:17:47.005561   31350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:17:47.005754   31350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
	I1025 09:17:47.005906   31350 out.go:368] Setting JSON to false
	I1025 09:17:47.005941   31350 mustload.go:65] Loading cluster: multinode-557334
	I1025 09:17:47.006089   31350 notify.go:220] Checking for updates...
	I1025 09:17:47.006329   31350 config.go:182] Loaded profile config "multinode-557334": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:17:47.006343   31350 status.go:174] checking status of multinode-557334 ...
	I1025 09:17:47.008673   31350 status.go:371] multinode-557334 host status = "Stopped" (err=<nil>)
	I1025 09:17:47.008688   31350 status.go:384] host is not running, skipping remaining checks
	I1025 09:17:47.008693   31350 status.go:176] multinode-557334 status: &{Name:multinode-557334 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:17:47.008709   31350 status.go:174] checking status of multinode-557334-m02 ...
	I1025 09:17:47.009991   31350 status.go:371] multinode-557334-m02 host status = "Stopped" (err=<nil>)
	I1025 09:17:47.010005   31350 status.go:384] host is not running, skipping remaining checks
	I1025 09:17:47.010009   31350 status.go:176] multinode-557334-m02 status: &{Name:multinode-557334-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (175.53s)

TestMultiNode/serial/RestartMultiNode (115.39s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-557334 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1025 09:18:21.631041    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-557334 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.933693626s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-557334 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (115.39s)

TestMultiNode/serial/ValidateNameConflict (40.67s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-557334
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-557334-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-557334-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (82.510247ms)

-- stdout --
	* [multinode-557334-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-557334-m02' is duplicated with machine name 'multinode-557334-m02' in profile 'multinode-557334'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-557334-m03 --driver=kvm2  --container-runtime=crio
E1025 09:20:18.561853    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-557334-m03 --driver=kvm2  --container-runtime=crio: (39.485616965s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-557334
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-557334: exit status 80 (202.447639ms)

-- stdout --
	* Adding node m03 to cluster multinode-557334 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-557334-m03 already exists in multinode-557334-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-557334-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.67s)
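
Both non-zero exits above are the point of the test: a profile name colliding with an existing machine name exits 14 (MK_USAGE), and `node add` refuses a node name already owned by another profile with exit 80 (GUEST_NODE_ADD). A minimal sketch asserting those observed exit codes:

// name_conflict_sketch.go — a sketch of the expected-failure assertions above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func expectExit(code int, args ...string) {
	err := exec.Command("out/minikube-linux-amd64", args...).Run()
	var ee *exec.ExitError
	if !errors.As(err, &ee) || ee.ExitCode() != code {
		panic(fmt.Sprintf("%v: want exit %d, got %v", args, code, err))
	}
}

func main() {
	// Duplicates the machine name already used inside profile multinode-557334.
	expectExit(14, "start", "-p", "multinode-557334-m02", "--driver=kvm2", "--container-runtime=crio")
	// The m03 name is held by a standalone profile, so node add must refuse it.
	expectExit(80, "node", "add", "-p", "multinode-557334")
	fmt.Println("both name conflicts rejected as expected")
}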

TestScheduledStopUnix (110.19s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-993756 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-993756 --memory=3072 --driver=kvm2  --container-runtime=crio: (38.553379592s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-993756 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-993756 -n scheduled-stop-993756
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-993756 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1025 09:23:05.839470    9881 retry.go:31] will retry after 110.478µs: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.840661    9881 retry.go:31] will retry after 205.256µs: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.841826    9881 retry.go:31] will retry after 180.041µs: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.842987    9881 retry.go:31] will retry after 311.223µs: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.844129    9881 retry.go:31] will retry after 701.735µs: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.845303    9881 retry.go:31] will retry after 480.661µs: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.846457    9881 retry.go:31] will retry after 1.689641ms: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.848692    9881 retry.go:31] will retry after 1.54395ms: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.850910    9881 retry.go:31] will retry after 3.589056ms: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.855172    9881 retry.go:31] will retry after 2.54591ms: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.858450    9881 retry.go:31] will retry after 6.973174ms: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.865735    9881 retry.go:31] will retry after 8.97537ms: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.875036    9881 retry.go:31] will retry after 19.152169ms: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.895348    9881 retry.go:31] will retry after 21.365145ms: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.917611    9881 retry.go:31] will retry after 14.742535ms: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
I1025 09:23:05.932860    9881 retry.go:31] will retry after 27.081599ms: open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/scheduled-stop-993756/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-993756 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-993756 -n scheduled-stop-993756
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-993756
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-993756 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-993756
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-993756: exit status 7 (62.164453ms)

-- stdout --
	scheduled-stop-993756
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-993756 -n scheduled-stop-993756
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-993756 -n scheduled-stop-993756: exit status 7 (59.617098ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-993756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-993756
--- PASS: TestScheduledStopUnix (110.19s)
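
The retry lines above show the scheduled stop's pid file being re-read with short, roughly doubling, jittered waits until it appears. A minimal sketch of that backoff pattern, with an illustrative path and limits (not minikube's internal retry helper):

// backoff_sketch.go — a sketch of the retry-with-backoff pattern visible above.
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

func main() {
	path := "/home/jenkins/.minikube/profiles/scheduled-stop-993756/pid" // illustrative path
	wait := 100 * time.Microsecond
	for i := 0; i < 16; i++ {
		if data, err := os.ReadFile(path); err == nil {
			fmt.Printf("scheduled stop pid: %s\n", data)
			return
		}
		// Jitter explains the uneven intervals in the log (e.g. 205µs then 180µs).
		d := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v\n", d)
		time.Sleep(d)
		wait *= 2
	}
	fmt.Fprintln(os.Stderr, "pid file never appeared")
}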

TestRunningBinaryUpgrade (167.62s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3740280460 start -p running-upgrade-026829 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3740280460 start -p running-upgrade-026829 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m42.719610195s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-026829 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-026829 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.414447296s)
helpers_test.go:175: Cleaning up "running-upgrade-026829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-026829
--- PASS: TestRunningBinaryUpgrade (167.62s)

TestKubernetesUpgrade (176.92s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-254344 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-254344 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.84402251s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-254344
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-254344: (2.157578969s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-254344 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-254344 status --format={{.Host}}: exit status 7 (89.372206ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-254344 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1025 09:25:18.557629    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-254344 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.690528621s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-254344 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-254344 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-254344 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (99.383387ms)

-- stdout --
	* [kubernetes-upgrade-254344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-254344
	    minikube start -p kubernetes-upgrade-254344 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2543442 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-254344 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-254344 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-254344 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.045621608s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-254344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-254344
--- PASS: TestKubernetesUpgrade (176.92s)
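
The sequence above walks a cluster from v1.28.0 to v1.34.1 and then confirms the one transition minikube refuses: an in-place downgrade, which exits 106 (K8S_DOWNGRADE_UNSUPPORTED). A minimal sketch of the same flow, using the flags from this log:

// upgrade_flow_sketch.go — a sketch of the upgrade/downgrade walk above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func mk(args ...string) error {
	return exec.Command("out/minikube-linux-amd64", args...).Run()
}

func main() {
	p := "kubernetes-upgrade-254344"
	must := func(err error) {
		if err != nil {
			panic(err)
		}
	}
	must(mk("start", "-p", p, "--memory=3072", "--kubernetes-version=v1.28.0",
		"--driver=kvm2", "--container-runtime=crio"))
	must(mk("stop", "-p", p))
	must(mk("start", "-p", p, "--memory=3072", "--kubernetes-version=v1.34.1",
		"--driver=kvm2", "--container-runtime=crio"))
	// Downgrading an existing cluster is refused outright.
	err := mk("start", "-p", p, "--memory=3072", "--kubernetes-version=v1.28.0",
		"--driver=kvm2", "--container-runtime=crio")
	var ee *exec.ExitError
	if !errors.As(err, &ee) || ee.ExitCode() != 106 {
		panic(fmt.Sprintf("want exit 106 on downgrade, got %v", err))
	}
	fmt.Println("downgrade correctly refused")
}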

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-024391 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-024391 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (96.920427ms)

-- stdout --
	* [NoKubernetes-024391] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (78.94s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-024391 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-024391 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m18.680037211s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-024391 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (78.94s)

TestNoKubernetes/serial/StartWithStopK8s (31.12s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-024391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-024391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (29.938472089s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-024391 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-024391 status -o json: exit status 2 (232.927089ms)

-- stdout --
	{"Name":"NoKubernetes-024391","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-024391
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (31.12s)
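
`status -o json` makes the mixed state above machine-readable: host Running, kubelet and apiserver Stopped. A minimal sketch of consuming it, with struct fields mirroring the JSON printed in this log rather than minikube's internal types:

// status_json_sketch.go — a sketch of parsing the status JSON shown above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Exit status 2 here still prints the JSON document, so keep stdout even on error.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-024391",
		"status", "-o", "json").Output()
	var st nodeStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}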

TestNetworkPlugins/group/false (4.23s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-816358 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-816358 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (136.590891ms)

-- stdout --
	* [false-816358] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1025 09:25:45.081332   35944 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:25:45.081661   35944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:25:45.081672   35944 out.go:374] Setting ErrFile to fd 2...
	I1025 09:25:45.081679   35944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:25:45.081996   35944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-5973/.minikube/bin
	I1025 09:25:45.082644   35944 out.go:368] Setting JSON to false
	I1025 09:25:45.083835   35944 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4095,"bootTime":1761380250,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:25:45.083958   35944 start.go:141] virtualization: kvm guest
	I1025 09:25:45.087333   35944 out.go:179] * [false-816358] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:25:45.088985   35944 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:25:45.089006   35944 notify.go:220] Checking for updates...
	I1025 09:25:45.091930   35944 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:25:45.093468   35944 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-5973/kubeconfig
	I1025 09:25:45.098543   35944 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-5973/.minikube
	I1025 09:25:45.100160   35944 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:25:45.101558   35944 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:25:45.103684   35944 config.go:182] Loaded profile config "NoKubernetes-024391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1025 09:25:45.103835   35944 config.go:182] Loaded profile config "kubernetes-upgrade-254344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:25:45.103962   35944 config.go:182] Loaded profile config "running-upgrade-026829": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1025 09:25:45.104080   35944 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:25:45.142628   35944 out.go:179] * Using the kvm2 driver based on user configuration
	I1025 09:25:45.144327   35944 start.go:305] selected driver: kvm2
	I1025 09:25:45.144362   35944 start.go:925] validating driver "kvm2" against <nil>
	I1025 09:25:45.144377   35944 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:25:45.147092   35944 out.go:203] 
	W1025 09:25:45.148793   35944 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1025 09:25:45.150361   35944 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-816358 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-816358

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-816358

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-816358

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-816358

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-816358

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-816358

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-816358

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-816358

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-816358

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-816358

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-816358

>>> host: crictl pods:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: crictl containers:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> k8s: describe netcat deployment:
error: context "false-816358" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-816358" does not exist

>>> k8s: netcat logs:
error: context "false-816358" does not exist

>>> k8s: describe coredns deployment:
error: context "false-816358" does not exist

>>> k8s: describe coredns pods:
error: context "false-816358" does not exist

>>> k8s: coredns logs:
error: context "false-816358" does not exist

>>> k8s: describe api server pod(s):
error: context "false-816358" does not exist

>>> k8s: api server logs:
error: context "false-816358" does not exist

>>> host: /etc/cni:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: ip a s:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: ip r s:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: iptables-save:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: iptables table nat:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> k8s: describe kube-proxy daemon set:
error: context "false-816358" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-816358" does not exist

>>> k8s: kube-proxy logs:
error: context "false-816358" does not exist

>>> host: kubelet daemon status:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: kubelet daemon config:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> k8s: kubelet logs:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:25:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.84:8443
  name: NoKubernetes-024391
contexts:
- context:
    cluster: NoKubernetes-024391
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:25:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-024391
  name: NoKubernetes-024391
current-context: NoKubernetes-024391
kind: Config
users:
- name: NoKubernetes-024391
  user:
    client-certificate: /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/NoKubernetes-024391/client.crt
    client-key: /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/NoKubernetes-024391/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-816358

>>> host: docker daemon status:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: docker daemon config:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: /etc/docker/daemon.json:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: docker system info:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: cri-docker daemon status:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: cri-docker daemon config:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: cri-dockerd version:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: containerd daemon status:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: containerd daemon config:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: /etc/containerd/config.toml:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: containerd config dump:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: crio daemon status:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: crio daemon config:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: /etc/crio:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

>>> host: crio config:
* Profile "false-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-816358"

----------------------- debugLogs end: false-816358 [took: 3.898010281s] --------------------------------
helpers_test.go:175: Cleaning up "false-816358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-816358
--- PASS: TestNetworkPlugins/group/false (4.23s)

TestNoKubernetes/serial/Start (45.66s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-024391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-024391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.661908414s)
--- PASS: TestNoKubernetes/serial/Start (45.66s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-024391 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-024391 "sudo systemctl is-active --quiet service kubelet": exit status 1 (169.214ms)

** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

TestNoKubernetes/serial/ProfileList (9.85s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
E1025 09:26:55.919365    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (9.187437695s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (9.85s)

TestNoKubernetes/serial/Stop (1.4s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-024391
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-024391: (1.399126256s)
--- PASS: TestNoKubernetes/serial/Stop (1.40s)

TestNoKubernetes/serial/StartNoArgs (30.51s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-024391 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-024391 --driver=kvm2  --container-runtime=crio: (30.505511565s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (30.51s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-024391 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-024391 "sudo systemctl is-active --quiet service kubelet": exit status 1 (179.720683ms)

** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

TestPause/serial/Start (130.45s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-220312 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-220312 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m10.44800633s)
--- PASS: TestPause/serial/Start (130.45s)

TestStoppedBinaryUpgrade/Setup (0.49s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

TestStoppedBinaryUpgrade/Upgrade (133.8s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.133325489 start -p stopped-upgrade-196082 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.133325489 start -p stopped-upgrade-196082 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m37.843967005s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.133325489 -p stopped-upgrade-196082 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.133325489 -p stopped-upgrade-196082 stop: (1.805457287s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-196082 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-196082 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (34.150816039s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (133.80s)

TestNetworkPlugins/group/auto/Start (111.14s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m51.13747024s)
--- PASS: TestNetworkPlugins/group/auto/Start (111.14s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-196082
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-196082: (1.064838041s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

TestNetworkPlugins/group/kindnet/Start (93.89s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1025 09:30:18.557977    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m33.888877042s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (93.89s)

TestNetworkPlugins/group/auto/KubeletFlags (0.18s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-816358 "pgrep -a kubelet"
I1025 09:30:28.721006    9881 config.go:182] Loaded profile config "auto-816358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

TestNetworkPlugins/group/auto/NetCatPod (11.28s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-816358 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mzw9v" [c1e755f3-76d3-4465-8b07-e99bbd382774] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mzw9v" [c1e755f3-76d3-4465-8b07-e99bbd382774] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005271689s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)

TestNetworkPlugins/group/calico/Start (86.93s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m26.926053604s)
--- PASS: TestNetworkPlugins/group/calico/Start (86.93s)

TestNetworkPlugins/group/auto/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-816358 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-816358 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-816358 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/Start (75.53s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m15.530594724s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (75.53s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-ngcx7" [dfe05fe0-53be-4822-8944-0ea639a98b37] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005762023s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-816358 "pgrep -a kubelet"
I1025 09:31:35.564798    9881 config.go:182] Loaded profile config "kindnet-816358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.3s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-816358 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-plf6d" [2d376163-b144-4865-a20b-23f8046d4a6b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-plf6d" [2d376163-b144-4865-a20b-23f8046d4a6b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004723206s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.30s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-816358 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-816358 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-816358 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-r4942" [7b326f62-7b5d-44c2-ad2c-2af0f017437b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005263083s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/Start (92.13s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m32.132193144s)
--- PASS: TestNetworkPlugins/group/bridge/Start (92.13s)

TestNetworkPlugins/group/flannel/Start (92.78s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m32.778677055s)
--- PASS: TestNetworkPlugins/group/flannel/Start (92.78s)

TestNetworkPlugins/group/calico/KubeletFlags (0.18s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-816358 "pgrep -a kubelet"
I1025 09:32:09.106672    9881 config.go:182] Loaded profile config "calico-816358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

TestNetworkPlugins/group/calico/NetCatPod (12.25s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-816358 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5xpd6" [ee80730d-6037-43ca-8ba0-dc61fad84210] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5xpd6" [ee80730d-6037-43ca-8ba0-dc61fad84210] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003563306s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.25s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-816358 "pgrep -a kubelet"
I1025 09:32:10.657791    9881 config.go:182] Loaded profile config "custom-flannel-816358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-816358 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-48r2n" [907f697a-e311-4104-a4cd-82d2e6773b4d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 09:32:12.834127    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-48r2n" [907f697a-e311-4104-a4cd-82d2e6773b4d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004424502s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

TestNetworkPlugins/group/calico/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-816358 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

TestNetworkPlugins/group/calico/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-816358 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-816358 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-816358 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-816358 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-816358 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (86.72s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-816358 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m26.722868782s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.72s)

TestStartStop/group/old-k8s-version/serial/FirstStart (120.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-843693 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-843693 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (2m0.057429585s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (120.06s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-816358 "pgrep -a kubelet"
I1025 09:33:35.777890    9881 config.go:182] Loaded profile config "bridge-816358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

TestNetworkPlugins/group/bridge/NetCatPod (11.34s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-816358 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ldbbf" [cf7d78d1-ddfb-4171-892e-276d5d74b7b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ldbbf" [cf7d78d1-ddfb-4171-892e-276d5d74b7b1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.009772499s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-7vmfz" [282fc8dd-f29f-4ef2-97a2-d180fdf295cb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005863729s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-816358 "pgrep -a kubelet"
I1025 09:33:46.291393    9881 config.go:182] Loaded profile config "flannel-816358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

TestNetworkPlugins/group/flannel/NetCatPod (11.24s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-816358 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6sl5f" [778b2c0d-a42b-4f24-bb33-da0ecb1e3988] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6sl5f" [778b2c0d-a42b-4f24-bb33-da0ecb1e3988] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003725801s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-816358 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-816358 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-816358 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestNetworkPlugins/group/flannel/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-816358 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-816358 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-816358 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestStartStop/group/no-preload/serial/FirstStart (76.07s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-823534 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-823534 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m16.070733825s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.07s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-816358 "pgrep -a kubelet"
I1025 09:34:07.240605    9881 config.go:182] Loaded profile config "enable-default-cni-816358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-816358 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dz5x2" [53b58543-02cf-41a4-99fd-c2f705050fd3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dz5x2" [53b58543-02cf-41a4-99fd-c2f705050fd3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004270569s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

TestStartStop/group/embed-certs/serial/FirstStart (102.55s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-991297 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-991297 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m42.545149444s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (102.55s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-816358 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-816358 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-816358 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
E1025 09:38:12.839195    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (96.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-053926 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-053926 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m36.220403654s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (96.22s)

TestStartStop/group/old-k8s-version/serial/DeployApp (13.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-843693 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6401cd41-dbe6-4b36-88e7-9541ef400f57] Pending
helpers_test.go:352: "busybox" [6401cd41-dbe6-4b36-88e7-9541ef400f57] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6401cd41-dbe6-4b36-88e7-9541ef400f57] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.005028263s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-843693 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (13.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.79s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-843693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-843693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.557497407s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-843693 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.79s)

TestStartStop/group/old-k8s-version/serial/Stop (86.32s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-843693 --alsologtostderr -v=3
E1025 09:35:01.633275    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-843693 --alsologtostderr -v=3: (1m26.316785492s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (86.32s)
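(The E1025 cert_rotation lines interleaved here and below appear to come from the shared test process still watching client certificates of profiles, e.g. functional-897515 and auto-816358, that earlier tests in this run already deleted; they recur throughout the remaining output and do not affect the pass/fail results of the tests they interrupt.)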

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.33s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-823534 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b641aa6f-8bf8-4b6c-925f-3538af9c0f0e] Pending
E1025 09:35:18.557706    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/functional-897515/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [b641aa6f-8bf8-4b6c-925f-3538af9c0f0e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b641aa6f-8bf8-4b6c-925f-3538af9c0f0e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.006177307s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-823534 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-823534 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1025 09:35:28.978112    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:28.984612    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:28.996051    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:29.017587    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:29.059111    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:29.140736    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:29.302583    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-823534 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (85.81s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-823534 --alsologtostderr -v=3
E1025 09:35:29.623975    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:30.266320    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:31.548710    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:34.110063    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:39.231810    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:49.473488    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-823534 --alsologtostderr -v=3: (1m25.805117334s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (85.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.28s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-991297 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2dad7daf-2f9a-4a83-8e43-e4d754a9c726] Pending
helpers_test.go:352: "busybox" [2dad7daf-2f9a-4a83-8e43-e4d754a9c726] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2dad7daf-2f9a-4a83-8e43-e4d754a9c726] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004457822s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-991297 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-991297 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-991297 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (83.89s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-991297 --alsologtostderr -v=3
E1025 09:36:09.955170    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-991297 --alsologtostderr -v=3: (1m23.892950377s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (83.89s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-053926 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [33b2d2a3-cdf5-45bb-8cf0-768f53ec4d24] Pending
helpers_test.go:352: "busybox" [33b2d2a3-cdf5-45bb-8cf0-768f53ec4d24] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [33b2d2a3-cdf5-45bb-8cf0-768f53ec4d24] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003915679s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-053926 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843693 -n old-k8s-version-843693
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843693 -n old-k8s-version-843693: exit status 7 (62.935294ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-843693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)
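The "(may be ok)" note above reflects how the harness reads minikube status: a stopped host makes the command exit non-zero (status 7 here) while still printing "Stopped" on stdout, so the exit code alone is not treated as a failure. A sketch of reading both the output and the exit code in Go; the interpretation in the comment is taken from the log lines above, not from minikube documentation:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "old-k8s-version-843693"
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	host := strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Per the log above, exit status 7 with "Stopped" on stdout is
		// expected after a stop; only other combinations would be fatal.
		fmt.Printf("host=%s exit=%d (may be ok)\n", host, ee.ExitCode())
		return
	}
	fmt.Printf("host=%s running\n", host)
}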

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (44.32s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-843693 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-843693 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (44.03396823s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843693 -n old-k8s-version-843693
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.32s)
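SecondStart reuses the existing profile and its cached state, which is consistent with it finishing in about 44s here against the 96s cold FirstStart shown at the top of this excerpt. A sketch of timing such a restart and re-probing the host, with a trimmed-down flag set relative to the full invocation above:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "old-k8s-version-843693"
	args := []string{"start", "-p", profile, "--memory=3072", "--alsologtostderr",
		"--wait=true", "--driver=kvm2", "--container-runtime=crio",
		"--kubernetes-version=v1.28.0"}
	start := time.Now()
	if err := exec.Command("out/minikube-linux-amd64", args...).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("second start took %v\n", time.Since(start).Round(time.Second))
	// The harness follows the restart with a host status probe, as the log shows.
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	fmt.Printf("host: %s", out)
}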

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-053926 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-053926 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (83.38s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-053926 --alsologtostderr -v=3
E1025 09:36:29.361360    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:29.368706    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:29.380289    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:29.401782    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:29.443331    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:29.524897    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:29.686520    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:30.008748    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:30.650163    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:31.931672    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:34.494193    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:39.616074    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:49.857458    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:50.917572    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/auto-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-053926 --alsologtostderr -v=3: (1m23.376481648s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (83.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-823534 -n no-preload-823534
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-823534 -n no-preload-823534: exit status 7 (63.627781ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-823534 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (50.53s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-823534 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 09:37:02.922298    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:02.928837    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:02.940440    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:02.962056    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:03.003555    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:03.085086    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:03.246739    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:03.568472    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:04.210590    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:05.491983    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-823534 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (50.238193398s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-823534 -n no-preload-823534
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-hxk57" [d36a0471-0d69-4462-b3d2-dbeecc976f1f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1025 09:37:08.053345    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:10.339412    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:10.929987    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:10.936354    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:10.947770    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:10.969641    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:11.011896    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:11.093212    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:11.255560    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:11.577383    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:12.219718    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-hxk57" [d36a0471-0d69-4462-b3d2-dbeecc976f1f] Running
E1025 09:37:12.833886    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/addons-631036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:13.175367    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:13.501444    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:16.063220    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004964186s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-hxk57" [d36a0471-0d69-4462-b3d2-dbeecc976f1f] Running
E1025 09:37:21.185212    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:23.417419    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004538486s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-843693 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-843693 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)
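VerifyKubernetesImages lists every image in the node and flags any that are not part of a stock minikube deployment, such as the busybox and kindnetd images above. The report does not show the JSON shape that image list --format=json emits, so this sketch decodes it generically rather than into a typed struct:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p",
		"old-k8s-version-843693", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images interface{} // schema unknown from the report alone
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	// Classifying "non-minikube" images is left to the reader; the harness
	// presumably matches against a known-image allowlist.
	fmt.Printf("%+v\n", images)
}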

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.69s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-843693 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843693 -n old-k8s-version-843693
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843693 -n old-k8s-version-843693: exit status 2 (215.557282ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-843693 -n old-k8s-version-843693
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-843693 -n old-k8s-version-843693: exit status 2 (216.081172ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-843693 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843693 -n old-k8s-version-843693
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-843693 -n old-k8s-version-843693
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.69s)
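The Pause step expects asymmetric status output: after pause, {{.APIServer}} reads "Paused" and {{.Kubelet}} reads "Stopped", both with exit status 2, and unpause should restore both probes to a clean exit. A sketch of that probe pair; as before, the exit-code interpretation comes from the log, not from minikube documentation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probe runs one minikube status query and reports stdout plus the exit code.
func probe(profile, format string) {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format="+format, "-p", profile, "-n", profile).Output()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	}
	fmt.Printf("%s -> %q (exit %d)\n", format, strings.TrimSpace(string(out)), code)
}

func main() {
	profile := "old-k8s-version-843693"
	exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run()
	probe(profile, "{{.APIServer}}") // "Paused", exit 2 per the log
	probe(profile, "{{.Kubelet}}")   // "Stopped", exit 2 per the log
	exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run()
}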

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (49.08s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-675061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 09:37:31.427212    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-675061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (49.08006442s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991297 -n embed-certs-991297
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991297 -n embed-certs-991297: exit status 7 (60.85973ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-991297 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (62.41s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-991297 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 09:37:43.899486    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-991297 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m2.141995655s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-991297 -n embed-certs-991297
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (62.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sqvlx" [a8184d0f-af69-47ed-a30c-30a4db6f90e6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sqvlx" [a8184d0f-af69-47ed-a30c-30a4db6f90e6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.003986241s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-053926 -n default-k8s-diff-port-053926
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-053926 -n default-k8s-diff-port-053926: exit status 7 (84.528001ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-053926 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (67.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-053926 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 09:37:51.300786    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:51.908700    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-053926 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m6.566071678s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-053926 -n default-k8s-diff-port-053926
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (67.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sqvlx" [a8184d0f-af69-47ed-a30c-30a4db6f90e6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.031125315s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-823534 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-823534 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.40s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-823534 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-823534 --alsologtostderr -v=1: (1.032362419s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-823534 -n no-preload-823534
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-823534 -n no-preload-823534: exit status 2 (264.566467ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-823534 -n no-preload-823534
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-823534 -n no-preload-823534: exit status 2 (254.871919ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-823534 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-823534 --alsologtostderr -v=1: (1.074025421s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-823534 -n no-preload-823534
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-823534 -n no-preload-823534
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-675061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-675061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.335815319s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)
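The warning above, and the 0.00s DeployApp just before it, are two sides of the same constraint: newest-cni is started with --network-plugin=cni but no CNI plugin is deployed on top of it, so workload pods cannot schedule and the app-deployment checks are skipped. One hypothetical way to observe this from outside is to look for CNI configs on the node, which would be expected to be empty at this point:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// With --network-plugin=cni and nothing deployed on top, the CNI config
	// directory in the guest is expected to be empty, hence the warning.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "newest-cni-675061",
		"ssh", "ls -la /etc/cni/net.d").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("ssh failed:", err)
	}
}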

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.36s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-675061 --alsologtostderr -v=3
E1025 09:38:24.861390    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/calico-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-675061 --alsologtostderr -v=3: (10.361843336s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-675061 -n newest-cni-675061
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-675061 -n newest-cni-675061: exit status 7 (69.781942ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-675061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.49s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-675061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 09:38:32.870737    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/custom-flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-675061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (37.186980212s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-675061 -n newest-cni-675061
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rf275" [383c4bd8-8134-44cd-8e03-d16ae0df3fdf] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1025 09:38:36.096554    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/bridge-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:36.103022    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/bridge-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:36.114441    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/bridge-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:36.135977    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/bridge-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:36.177543    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/bridge-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:36.259048    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/bridge-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:36.420904    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/bridge-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:36.742715    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/bridge-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:37.385135    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/bridge-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:38.667060    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/bridge-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:40.110782    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:40.117271    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:40.128766    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:40.150325    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:40.191865    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:40.273421    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:40.435036    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:40.757204    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:41.228943    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/bridge-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:41.399564    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rf275" [383c4bd8-8134-44cd-8e03-d16ae0df3fdf] Running
E1025 09:38:42.681899    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:45.244227    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:38:46.350937    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/bridge-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.003867836s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rf275" [383c4bd8-8134-44cd-8e03-d16ae0df3fdf] Running
E1025 09:38:50.365631    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005059186s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-991297 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-991297 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)
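The image audit above is driven by `minikube image list`; a sketch of the same check run by hand against this profile (profile name taken from the log):

    out/minikube-linux-amd64 -p embed-certs-991297 image list --format=json
    # or, for a human-readable view:
    out/minikube-linux-amd64 -p embed-certs-991297 image list --format=table

The test then flags anything outside the stock minikube image set, which is why kindnetd and busybox show up as "non-minikube" images.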

TestStartStop/group/embed-certs/serial/Pause (3.72s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-991297 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-991297 --alsologtostderr -v=1: (1.638439256s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-991297 -n embed-certs-991297
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-991297 -n embed-certs-991297: exit status 2 (245.449715ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-991297 -n embed-certs-991297
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-991297 -n embed-certs-991297: exit status 2 (254.645252ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-991297 --alsologtostderr -v=1
E1025 09:38:56.592451    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/bridge-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-991297 -n embed-certs-991297
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-991297 -n embed-certs-991297
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.72s)
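For reference, the pause verification above reduces to the sequence below; the non-zero exits are expected, because `minikube status` reports a paused apiserver or stopped kubelet with exit status 2 (a sketch using the profile name from the log):

    out/minikube-linux-amd64 pause -p embed-certs-991297
    out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-991297   # "Paused", exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-991297     # "Stopped", exit 2
    out/minikube-linux-amd64 unpause -p embed-certs-991297
    out/minikube-linux-amd64 status -p embed-certs-991297                           # back to Running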

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m9nfl" [42770c31-628f-4af7-99ad-e6207cee4cee] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m9nfl" [42770c31-628f-4af7-99ad-e6207cee4cee] Running
E1025 09:39:00.606958    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/flannel-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.003538648s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m9nfl" [42770c31-628f-4af7-99ad-e6207cee4cee] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004770153s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-053926 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-675061 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.72s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-675061 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-675061 -n newest-cni-675061
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-675061 -n newest-cni-675061: exit status 2 (222.089065ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-675061 -n newest-cni-675061
E1025 09:39:07.474166    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/enable-default-cni-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:07.480569    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/enable-default-cni-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:07.491974    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/enable-default-cni-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:07.513447    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/enable-default-cni-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:07.554883    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/enable-default-cni-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:07.636421    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/enable-default-cni-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-675061 -n newest-cni-675061: exit status 2 (224.867844ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-675061 --alsologtostderr -v=1
E1025 09:39:07.798099    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/enable-default-cni-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:08.119895    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/enable-default-cni-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-675061 -n newest-cni-675061
E1025 09:39:08.761845    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/enable-default-cni-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-675061 -n newest-cni-675061
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.72s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-053926 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.5s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-053926 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-053926 -n default-k8s-diff-port-053926
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-053926 -n default-k8s-diff-port-053926: exit status 2 (215.968762ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-053926 -n default-k8s-diff-port-053926
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-053926 -n default-k8s-diff-port-053926: exit status 2 (220.063894ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-053926 --alsologtostderr -v=1
E1025 09:39:12.605752    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/enable-default-cni-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-053926 -n default-k8s-diff-port-053926
E1025 09:39:13.222371    9881 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/kindnet-816358/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-053926 -n default-k8s-diff-port-053926
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.50s)
Test skip (40/323)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.33
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
142 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 5.71
267 TestNetworkPlugins/group/cilium 4.47
279 TestStartStop/group/disable-driver-mounts 0.46

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.33s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-631036 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
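All of the TunnelCmd skips in this group share one cause: `minikube tunnel` modifies the host routing table, so the test first probes whether `route` can run under sudo without prompting. A rough sketch of that precondition check, assuming `sudo -n` semantics (it fails instead of prompting when a password would be required):

    if sudo -n route >/dev/null 2>&1; then
        echo "passwordless sudo available; tunnel tests can run"
    else
        echo "password required to execute 'route', skipping"
    fi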

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
I1025 08:40:26.992230    9881 retry.go:31] will retry after 1.577429624s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:3b217578-f7c6-4acd-991c-1ad378f8ad68 ResourceVersion:718 Generation:0 CreationTimestamp:2025-10-25 08:40:26 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-3b217578-f7c6-4acd-991c-1ad378f8ad68 StorageClassName:0xc001add490 VolumeMode:0xc001add4a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (5.71s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-816358 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-816358

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-816358

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-816358

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-816358

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-816358

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-816358

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-816358

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-816358

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-816358

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-816358

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: /etc/hosts:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: /etc/resolv.conf:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-816358

>>> host: crictl pods:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: crictl containers:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> k8s: describe netcat deployment:
error: context "kubenet-816358" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-816358" does not exist

>>> k8s: netcat logs:
error: context "kubenet-816358" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-816358" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-816358" does not exist

>>> k8s: coredns logs:
error: context "kubenet-816358" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-816358" does not exist

>>> k8s: api server logs:
error: context "kubenet-816358" does not exist

>>> host: /etc/cni:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: ip a s:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: ip r s:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: iptables-save:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: iptables table nat:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-816358" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-816358" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-816358" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: kubelet daemon config:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> k8s: kubelet logs:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:25:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.84:8443
  name: NoKubernetes-024391
contexts:
- context:
    cluster: NoKubernetes-024391
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:25:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-024391
  name: NoKubernetes-024391
current-context: NoKubernetes-024391
kind: Config
users:
- name: NoKubernetes-024391
  user:
    client-certificate: /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/NoKubernetes-024391/client.crt
    client-key: /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/NoKubernetes-024391/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-816358

>>> host: docker daemon status:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: docker daemon config:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: docker system info:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: cri-docker daemon status:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: cri-docker daemon config:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: cri-dockerd version:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: containerd daemon status:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: containerd daemon config:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: containerd config dump:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: crio daemon status:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: crio daemon config:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: /etc/crio:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

>>> host: crio config:
* Profile "kubenet-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-816358"

----------------------- debugLogs end: kubenet-816358 [took: 5.536574534s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-816358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-816358
--- SKIP: TestNetworkPlugins/group/kubenet (5.71s)
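
Every ">>> host:" probe in the dump above fails with the same two lines because the kubenet profile was never started: the test skips before any "minikube start -p kubenet-816358" runs, so debugLogs has no VM to inspect. A minimal sketch, in Go, of a pre-flight check built from the command the log itself suggests ("minikube profile list"); the substring match on the table output is an illustration, not the harness's real logic:

```go
// Pre-flight check for the host probes above: ask minikube whether the
// profile exists before collecting anything.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// profileExists shells out to `minikube profile list` and scans its output
// for the profile name. Substring matching is a simplification.
func profileExists(name string) bool {
	out, _ := exec.Command("minikube", "profile", "list").CombinedOutput()
	return strings.Contains(string(out), name)
}

func main() {
	if !profileExists("kubenet-816358") {
		fmt.Println(`skipping host probes: profile "kubenet-816358" was never started`)
	}
}
```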
TestNetworkPlugins/group/cilium (4.47s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636:
----------------------- debugLogs start: cilium-816358 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-816358

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-816358

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-816358

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-816358

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-816358

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-816358

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-816358

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-816358

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-816358

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-816358

>>> host: /etc/nsswitch.conf:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: /etc/hosts:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: /etc/resolv.conf:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-816358

>>> host: crictl pods:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: crictl containers:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> k8s: describe netcat deployment:
error: context "cilium-816358" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-816358" does not exist

>>> k8s: netcat logs:
error: context "cilium-816358" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-816358" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-816358" does not exist

>>> k8s: coredns logs:
error: context "cilium-816358" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-816358" does not exist

>>> k8s: api server logs:
error: context "cilium-816358" does not exist

>>> host: /etc/cni:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: ip a s:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: ip r s:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: iptables-save:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: iptables table nat:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-816358

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-816358

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-816358" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-816358" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-816358

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-816358

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-816358" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-816358" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-816358" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-816358" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-816358" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: kubelet daemon config:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> k8s: kubelet logs:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21796-5973/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:25:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.84:8443
  name: NoKubernetes-024391
contexts:
- context:
    cluster: NoKubernetes-024391
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 09:25:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-024391
  name: NoKubernetes-024391
current-context: NoKubernetes-024391
kind: Config
users:
- name: NoKubernetes-024391
  user:
    client-certificate: /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/NoKubernetes-024391/client.crt
    client-key: /home/jenkins/minikube-integration/21796-5973/.minikube/profiles/NoKubernetes-024391/client.key
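
The kubectl config dump above explains both error strings repeated through this section ("Error in configuration: context was not found" and context "cilium-816358" does not exist): the kubeconfig defines exactly one cluster, context, and user, all named NoKubernetes-024391 (apparently left over from an earlier test), so any kubectl call targeting the cilium-816358 context fails before contacting a cluster. A minimal sketch of that lookup in Go using client-go; the kubeconfig path (clientcmd.RecommendedHomeFile, i.e. ~/.kube/config) and client-go itself are assumptions for illustration, since debugLogs just shells out to kubectl:

```go
// Reproduce the failing lookup: load the kubeconfig and check whether the
// requested context exists.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Contexts["cilium-816358"]; !ok {
		// The situation the dump shows: only NoKubernetes-024391 is defined.
		fmt.Println(`context "cilium-816358" was not found; available contexts:`)
		for name := range cfg.Contexts {
			fmt.Println(" -", name)
		}
	}
}
```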

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-816358

>>> host: docker daemon status:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: docker daemon config:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: docker system info:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: cri-docker daemon status:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: cri-docker daemon config:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: cri-dockerd version:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: containerd daemon status:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: containerd daemon config:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: containerd config dump:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: crio daemon status:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: crio daemon config:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: /etc/crio:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

>>> host: crio config:
* Profile "cilium-816358" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816358"

----------------------- debugLogs end: cilium-816358 [took: 4.288806517s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-816358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-816358
--- SKIP: TestNetworkPlugins/group/cilium (4.47s)
TestStartStop/group/disable-driver-mounts (0.46s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-769361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-769361
--- SKIP: TestStartStop/group/disable-driver-mounts (0.46s)
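
The guard at start_stop_delete_test.go:101 skips this group everywhere except the virtualbox driver, which is why a KVM run only ever creates and deletes the placeholder profile. A minimal sketch of what such a guard can look like; driverName is a hypothetical stand-in for the harness's real flag plumbing, and the actual check may be shaped differently:

```go
package startstop

import "testing"

// driverName stands in for the suite's driver flag so the sketch compiles on
// its own; this report's run used the KVM driver.
var driverName = "kvm2"

// TestDisableDriverMounts sketches the guard that produces the SKIP above.
func TestDisableDriverMounts(t *testing.T) {
	if driverName != "virtualbox" {
		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
	}
	// mount-disabling assertions would follow here
}
```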