Test Report: KVM_Linux_crio 21800

bb40a8e434b348a4cf46a27f5566e4aff121b396:2025-10-29:42116

Failed tests (3/343)

Order  Failed test                                      Duration (s)
37     TestAddons/parallel/Ingress                      156.04
244    TestPreload                                      142.65
299    TestPause/serial/SecondStartNoReconfiguration    66.02

TestAddons/parallel/Ingress (156.04s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-131912 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-131912 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-131912 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [1d81a51a-504f-47a4-81bc-5026e5bfc0e8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [1d81a51a-504f-47a4-81bc-5026e5bfc0e8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004448043s
I1029 08:24:43.665558  141231 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-131912 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.259992623s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-131912 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.91
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-131912 -n addons-131912
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-131912 logs -n 25: (1.264274367s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-019680                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-019680 │ jenkins │ v1.37.0 │ 29 Oct 25 08:21 UTC │ 29 Oct 25 08:21 UTC │
	│ start   │ --download-only -p binary-mirror-227116 --alsologtostderr --binary-mirror http://127.0.0.1:41897 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-227116 │ jenkins │ v1.37.0 │ 29 Oct 25 08:21 UTC │                     │
	│ delete  │ -p binary-mirror-227116                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-227116 │ jenkins │ v1.37.0 │ 29 Oct 25 08:21 UTC │ 29 Oct 25 08:21 UTC │
	│ addons  │ enable dashboard -p addons-131912                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:21 UTC │                     │
	│ addons  │ disable dashboard -p addons-131912                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:21 UTC │                     │
	│ start   │ -p addons-131912 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:21 UTC │ 29 Oct 25 08:23 UTC │
	│ addons  │ addons-131912 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │ 29 Oct 25 08:23 UTC │
	│ addons  │ addons-131912 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:23 UTC │ 29 Oct 25 08:24 UTC │
	│ addons  │ enable headlamp -p addons-131912 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:24 UTC │
	│ addons  │ addons-131912 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:24 UTC │
	│ addons  │ addons-131912 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:24 UTC │
	│ ssh     │ addons-131912 ssh cat /opt/local-path-provisioner/pvc-4ff904af-fa12-437d-acb0-f26b2bf41ea4_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:24 UTC │
	│ addons  │ addons-131912 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:25 UTC │
	│ addons  │ addons-131912 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:24 UTC │
	│ ip      │ addons-131912 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:24 UTC │
	│ addons  │ addons-131912 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:24 UTC │
	│ addons  │ addons-131912 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:24 UTC │
	│ addons  │ addons-131912 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:24 UTC │
	│ ssh     │ addons-131912 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │                     │
	│ addons  │ addons-131912 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:24 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-131912                                                                                                                                                                                                                                                                                                                                                                                         │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:24 UTC │
	│ addons  │ addons-131912 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:24 UTC │ 29 Oct 25 08:24 UTC │
	│ addons  │ addons-131912 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:25 UTC │ 29 Oct 25 08:25 UTC │
	│ addons  │ addons-131912 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:25 UTC │ 29 Oct 25 08:25 UTC │
	│ ip      │ addons-131912 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-131912        │ jenkins │ v1.37.0 │ 29 Oct 25 08:26 UTC │ 29 Oct 25 08:26 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:21:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:21:28.277195  141947 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:21:28.277514  141947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:21:28.277524  141947 out.go:374] Setting ErrFile to fd 2...
	I1029 08:21:28.277528  141947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:21:28.277718  141947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
	I1029 08:21:28.278229  141947 out.go:368] Setting JSON to false
	I1029 08:21:28.279119  141947 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3817,"bootTime":1761722271,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 08:21:28.279253  141947 start.go:143] virtualization: kvm guest
	I1029 08:21:28.281033  141947 out.go:179] * [addons-131912] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 08:21:28.282651  141947 notify.go:221] Checking for updates...
	I1029 08:21:28.282685  141947 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:21:28.283953  141947 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:21:28.285174  141947 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	I1029 08:21:28.286466  141947 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	I1029 08:21:28.287534  141947 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 08:21:28.288572  141947 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:21:28.289788  141947 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:21:28.319467  141947 out.go:179] * Using the kvm2 driver based on user configuration
	I1029 08:21:28.320507  141947 start.go:309] selected driver: kvm2
	I1029 08:21:28.320526  141947 start.go:930] validating driver "kvm2" against <nil>
	I1029 08:21:28.320542  141947 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:21:28.321359  141947 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 08:21:28.321650  141947 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:21:28.321692  141947 cni.go:84] Creating CNI manager for ""
	I1029 08:21:28.321744  141947 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1029 08:21:28.321753  141947 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1029 08:21:28.321790  141947 start.go:353] cluster config:
	{Name:addons-131912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-131912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:21:28.321872  141947 iso.go:125] acquiring lock: {Name:mk91f2a3d67828aaa5b9f798c71cdbe9317767a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:21:28.323904  141947 out.go:179] * Starting "addons-131912" primary control-plane node in "addons-131912" cluster
	I1029 08:21:28.324819  141947 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:21:28.324851  141947 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-137232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 08:21:28.324861  141947 cache.go:59] Caching tarball of preloaded images
	I1029 08:21:28.324951  141947 preload.go:233] Found /home/jenkins/minikube-integration/21800-137232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 08:21:28.324964  141947 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 08:21:28.325283  141947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/config.json ...
	I1029 08:21:28.325308  141947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/config.json: {Name:mk19e4dc8eb84fe730355dcf6b9a5355984afff6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:28.325485  141947 start.go:360] acquireMachinesLock for addons-131912: {Name:mkcf4e1d7f2bf8251db3d5b4273e9a32697d7a63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1029 08:21:28.325549  141947 start.go:364] duration metric: took 47.439µs to acquireMachinesLock for "addons-131912"
	I1029 08:21:28.325573  141947 start.go:93] Provisioning new machine with config: &{Name:addons-131912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-131912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:21:28.325628  141947 start.go:125] createHost starting for "" (driver="kvm2")
	I1029 08:21:28.326968  141947 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1029 08:21:28.327164  141947 start.go:159] libmachine.API.Create for "addons-131912" (driver="kvm2")
	I1029 08:21:28.327203  141947 client.go:173] LocalClient.Create starting
	I1029 08:21:28.327297  141947 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem
	I1029 08:21:28.706365  141947 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/cert.pem
	I1029 08:21:29.133699  141947 main.go:143] libmachine: creating domain...
	I1029 08:21:29.133720  141947 main.go:143] libmachine: creating network...
	I1029 08:21:29.135160  141947 main.go:143] libmachine: found existing default network
	I1029 08:21:29.135346  141947 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1029 08:21:29.135864  141947 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e490f0}
	I1029 08:21:29.135965  141947 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-131912</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1029 08:21:29.141729  141947 main.go:143] libmachine: creating private network mk-addons-131912 192.168.39.0/24...
	I1029 08:21:29.203962  141947 main.go:143] libmachine: private network mk-addons-131912 192.168.39.0/24 created
	I1029 08:21:29.204244  141947 main.go:143] libmachine: <network>
	  <name>mk-addons-131912</name>
	  <uuid>5367da55-bfe5-4e4b-aa09-9e2fe64a0273</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:ec:f0:89'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1029 08:21:29.204270  141947 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912 ...
	I1029 08:21:29.204292  141947 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21800-137232/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1029 08:21:29.204303  141947 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21800-137232/.minikube
	I1029 08:21:29.204396  141947 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21800-137232/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21800-137232/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1029 08:21:29.495614  141947 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa...
	I1029 08:21:29.884268  141947 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/addons-131912.rawdisk...
	I1029 08:21:29.884326  141947 main.go:143] libmachine: Writing magic tar header
	I1029 08:21:29.884354  141947 main.go:143] libmachine: Writing SSH key tar header
	I1029 08:21:29.884489  141947 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912 ...
	I1029 08:21:29.884599  141947 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912
	I1029 08:21:29.884658  141947 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912 (perms=drwx------)
	I1029 08:21:29.884682  141947 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21800-137232/.minikube/machines
	I1029 08:21:29.884698  141947 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21800-137232/.minikube/machines (perms=drwxr-xr-x)
	I1029 08:21:29.884710  141947 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21800-137232/.minikube
	I1029 08:21:29.884726  141947 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21800-137232/.minikube (perms=drwxr-xr-x)
	I1029 08:21:29.884754  141947 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21800-137232
	I1029 08:21:29.884776  141947 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21800-137232 (perms=drwxrwxr-x)
	I1029 08:21:29.884794  141947 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1029 08:21:29.884807  141947 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1029 08:21:29.884819  141947 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1029 08:21:29.884833  141947 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1029 08:21:29.884850  141947 main.go:143] libmachine: checking permissions on dir: /home
	I1029 08:21:29.884863  141947 main.go:143] libmachine: skipping /home - not owner
	I1029 08:21:29.884870  141947 main.go:143] libmachine: defining domain...
	I1029 08:21:29.886223  141947 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-131912</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/addons-131912.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-131912'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1029 08:21:29.894176  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:bc:ca:15 in network default
	I1029 08:21:29.894923  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:29.894944  141947 main.go:143] libmachine: starting domain...
	I1029 08:21:29.894950  141947 main.go:143] libmachine: ensuring networks are active...
	I1029 08:21:29.895796  141947 main.go:143] libmachine: Ensuring network default is active
	I1029 08:21:29.896253  141947 main.go:143] libmachine: Ensuring network mk-addons-131912 is active
	I1029 08:21:29.897003  141947 main.go:143] libmachine: getting domain XML...
	I1029 08:21:29.898226  141947 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-131912</name>
	  <uuid>82b531d1-2b07-4378-a2c5-b2b88852f51e</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/addons-131912.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:75:ae:84'/>
	      <source network='mk-addons-131912'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:bc:ca:15'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1029 08:21:31.201276  141947 main.go:143] libmachine: waiting for domain to start...
	I1029 08:21:31.202867  141947 main.go:143] libmachine: domain is now running
	I1029 08:21:31.202884  141947 main.go:143] libmachine: waiting for IP...
	I1029 08:21:31.203738  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:31.204224  141947 main.go:143] libmachine: no network interface addresses found for domain addons-131912 (source=lease)
	I1029 08:21:31.204235  141947 main.go:143] libmachine: trying to list again with source=arp
	I1029 08:21:31.204503  141947 main.go:143] libmachine: unable to find current IP address of domain addons-131912 in network mk-addons-131912 (interfaces detected: [])
	I1029 08:21:31.204545  141947 retry.go:31] will retry after 268.966595ms: waiting for domain to come up
	I1029 08:21:31.475497  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:31.476082  141947 main.go:143] libmachine: no network interface addresses found for domain addons-131912 (source=lease)
	I1029 08:21:31.476102  141947 main.go:143] libmachine: trying to list again with source=arp
	I1029 08:21:31.476452  141947 main.go:143] libmachine: unable to find current IP address of domain addons-131912 in network mk-addons-131912 (interfaces detected: [])
	I1029 08:21:31.476510  141947 retry.go:31] will retry after 283.558861ms: waiting for domain to come up
	I1029 08:21:31.762039  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:31.762733  141947 main.go:143] libmachine: no network interface addresses found for domain addons-131912 (source=lease)
	I1029 08:21:31.762761  141947 main.go:143] libmachine: trying to list again with source=arp
	I1029 08:21:31.763206  141947 main.go:143] libmachine: unable to find current IP address of domain addons-131912 in network mk-addons-131912 (interfaces detected: [])
	I1029 08:21:31.763251  141947 retry.go:31] will retry after 321.615783ms: waiting for domain to come up
	I1029 08:21:32.086923  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:32.087617  141947 main.go:143] libmachine: no network interface addresses found for domain addons-131912 (source=lease)
	I1029 08:21:32.087635  141947 main.go:143] libmachine: trying to list again with source=arp
	I1029 08:21:32.087895  141947 main.go:143] libmachine: unable to find current IP address of domain addons-131912 in network mk-addons-131912 (interfaces detected: [])
	I1029 08:21:32.087952  141947 retry.go:31] will retry after 438.745387ms: waiting for domain to come up
	I1029 08:21:32.528772  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:32.529542  141947 main.go:143] libmachine: no network interface addresses found for domain addons-131912 (source=lease)
	I1029 08:21:32.529560  141947 main.go:143] libmachine: trying to list again with source=arp
	I1029 08:21:32.529952  141947 main.go:143] libmachine: unable to find current IP address of domain addons-131912 in network mk-addons-131912 (interfaces detected: [])
	I1029 08:21:32.529999  141947 retry.go:31] will retry after 636.219704ms: waiting for domain to come up
	I1029 08:21:33.167874  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:33.168431  141947 main.go:143] libmachine: no network interface addresses found for domain addons-131912 (source=lease)
	I1029 08:21:33.168450  141947 main.go:143] libmachine: trying to list again with source=arp
	I1029 08:21:33.168752  141947 main.go:143] libmachine: unable to find current IP address of domain addons-131912 in network mk-addons-131912 (interfaces detected: [])
	I1029 08:21:33.168785  141947 retry.go:31] will retry after 944.139239ms: waiting for domain to come up
	I1029 08:21:34.114210  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:34.114740  141947 main.go:143] libmachine: no network interface addresses found for domain addons-131912 (source=lease)
	I1029 08:21:34.114754  141947 main.go:143] libmachine: trying to list again with source=arp
	I1029 08:21:34.115066  141947 main.go:143] libmachine: unable to find current IP address of domain addons-131912 in network mk-addons-131912 (interfaces detected: [])
	I1029 08:21:34.115105  141947 retry.go:31] will retry after 785.964556ms: waiting for domain to come up
	I1029 08:21:34.902840  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:34.903423  141947 main.go:143] libmachine: no network interface addresses found for domain addons-131912 (source=lease)
	I1029 08:21:34.903445  141947 main.go:143] libmachine: trying to list again with source=arp
	I1029 08:21:34.903745  141947 main.go:143] libmachine: unable to find current IP address of domain addons-131912 in network mk-addons-131912 (interfaces detected: [])
	I1029 08:21:34.903787  141947 retry.go:31] will retry after 1.260394461s: waiting for domain to come up
	I1029 08:21:36.165522  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:36.166190  141947 main.go:143] libmachine: no network interface addresses found for domain addons-131912 (source=lease)
	I1029 08:21:36.166212  141947 main.go:143] libmachine: trying to list again with source=arp
	I1029 08:21:36.166634  141947 main.go:143] libmachine: unable to find current IP address of domain addons-131912 in network mk-addons-131912 (interfaces detected: [])
	I1029 08:21:36.166681  141947 retry.go:31] will retry after 1.377888706s: waiting for domain to come up
	I1029 08:21:37.546556  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:37.547181  141947 main.go:143] libmachine: no network interface addresses found for domain addons-131912 (source=lease)
	I1029 08:21:37.547205  141947 main.go:143] libmachine: trying to list again with source=arp
	I1029 08:21:37.547563  141947 main.go:143] libmachine: unable to find current IP address of domain addons-131912 in network mk-addons-131912 (interfaces detected: [])
	I1029 08:21:37.547613  141947 retry.go:31] will retry after 2.069181521s: waiting for domain to come up
	I1029 08:21:39.618595  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:39.619315  141947 main.go:143] libmachine: no network interface addresses found for domain addons-131912 (source=lease)
	I1029 08:21:39.619332  141947 main.go:143] libmachine: trying to list again with source=arp
	I1029 08:21:39.619704  141947 main.go:143] libmachine: unable to find current IP address of domain addons-131912 in network mk-addons-131912 (interfaces detected: [])
	I1029 08:21:39.619745  141947 retry.go:31] will retry after 2.783972974s: waiting for domain to come up
	I1029 08:21:42.406480  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:42.407029  141947 main.go:143] libmachine: no network interface addresses found for domain addons-131912 (source=lease)
	I1029 08:21:42.407049  141947 main.go:143] libmachine: trying to list again with source=arp
	I1029 08:21:42.407317  141947 main.go:143] libmachine: unable to find current IP address of domain addons-131912 in network mk-addons-131912 (interfaces detected: [])
	I1029 08:21:42.407359  141947 retry.go:31] will retry after 3.406321597s: waiting for domain to come up
	I1029 08:21:45.817087  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:45.817850  141947 main.go:143] libmachine: domain addons-131912 has current primary IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:45.817872  141947 main.go:143] libmachine: found domain IP: 192.168.39.91
	I1029 08:21:45.817881  141947 main.go:143] libmachine: reserving static IP address...
	I1029 08:21:45.818224  141947 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-131912", mac: "52:54:00:75:ae:84", ip: "192.168.39.91"} in network mk-addons-131912
	I1029 08:21:46.013933  141947 main.go:143] libmachine: reserved static IP address 192.168.39.91 for domain addons-131912
	I1029 08:21:46.013956  141947 main.go:143] libmachine: waiting for SSH...
	I1029 08:21:46.013962  141947 main.go:143] libmachine: Getting to WaitForSSH function...
	I1029 08:21:46.016854  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.017316  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:minikube Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:46.017340  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.017633  141947 main.go:143] libmachine: Using SSH client type: native
	I1029 08:21:46.017992  141947 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I1029 08:21:46.018007  141947 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1029 08:21:46.126210  141947 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:21:46.126601  141947 main.go:143] libmachine: domain creation complete
	I1029 08:21:46.128150  141947 machine.go:94] provisionDockerMachine start ...
	I1029 08:21:46.130312  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.130691  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:46.130711  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.130835  141947 main.go:143] libmachine: Using SSH client type: native
	I1029 08:21:46.131020  141947 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I1029 08:21:46.131030  141947 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 08:21:46.234855  141947 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1029 08:21:46.234893  141947 buildroot.go:166] provisioning hostname "addons-131912"
	I1029 08:21:46.238029  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.238508  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:46.238543  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.238750  141947 main.go:143] libmachine: Using SSH client type: native
	I1029 08:21:46.239012  141947 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I1029 08:21:46.239034  141947 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-131912 && echo "addons-131912" | sudo tee /etc/hostname
	I1029 08:21:46.367293  141947 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-131912
	
	I1029 08:21:46.370457  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.370925  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:46.370949  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.371169  141947 main.go:143] libmachine: Using SSH client type: native
	I1029 08:21:46.371445  141947 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I1029 08:21:46.371473  141947 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-131912' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-131912/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-131912' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 08:21:46.491393  141947 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 08:21:46.491477  141947 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21800-137232/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-137232/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-137232/.minikube}
	I1029 08:21:46.491508  141947 buildroot.go:174] setting up certificates
	I1029 08:21:46.491524  141947 provision.go:84] configureAuth start
	I1029 08:21:46.494445  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.494830  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:46.494853  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.497266  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.497692  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:46.497718  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.497856  141947 provision.go:143] copyHostCerts
	I1029 08:21:46.497918  141947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-137232/.minikube/cert.pem (1123 bytes)
	I1029 08:21:46.498064  141947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-137232/.minikube/key.pem (1675 bytes)
	I1029 08:21:46.498128  141947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-137232/.minikube/ca.pem (1082 bytes)
	I1029 08:21:46.498178  141947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-137232/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca-key.pem org=jenkins.addons-131912 san=[127.0.0.1 192.168.39.91 addons-131912 localhost minikube]
	I1029 08:21:46.775091  141947 provision.go:177] copyRemoteCerts
	I1029 08:21:46.775153  141947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 08:21:46.777951  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.778347  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:46.778370  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.778554  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:21:46.860639  141947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 08:21:46.886975  141947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1029 08:21:46.912934  141947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 08:21:46.938373  141947 provision.go:87] duration metric: took 446.830797ms to configureAuth
	I1029 08:21:46.938400  141947 buildroot.go:189] setting minikube options for container-runtime
	I1029 08:21:46.938652  141947 config.go:182] Loaded profile config "addons-131912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:21:46.941776  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.942189  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:46.942223  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:46.942503  141947 main.go:143] libmachine: Using SSH client type: native
	I1029 08:21:46.942742  141947 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I1029 08:21:46.942766  141947 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 08:21:47.175197  141947 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 08:21:47.175223  141947 machine.go:97] duration metric: took 1.047057155s to provisionDockerMachine
	I1029 08:21:47.175234  141947 client.go:176] duration metric: took 18.848020169s to LocalClient.Create
	I1029 08:21:47.175254  141947 start.go:167] duration metric: took 18.848095089s to libmachine.API.Create "addons-131912"
	I1029 08:21:47.175261  141947 start.go:293] postStartSetup for "addons-131912" (driver="kvm2")
	I1029 08:21:47.175271  141947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 08:21:47.175336  141947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 08:21:47.178428  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:47.178859  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:47.178887  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:47.179036  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:21:47.262832  141947 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 08:21:47.267196  141947 info.go:137] Remote host: Buildroot 2025.02
	I1029 08:21:47.267220  141947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-137232/.minikube/addons for local assets ...
	I1029 08:21:47.267282  141947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-137232/.minikube/files for local assets ...
	I1029 08:21:47.267305  141947 start.go:296] duration metric: took 92.039262ms for postStartSetup
	I1029 08:21:47.270159  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:47.270569  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:47.270593  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:47.270794  141947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/config.json ...
	I1029 08:21:47.270952  141947 start.go:128] duration metric: took 18.945315327s to createHost
	I1029 08:21:47.272962  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:47.273336  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:47.273358  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:47.273571  141947 main.go:143] libmachine: Using SSH client type: native
	I1029 08:21:47.273763  141947 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I1029 08:21:47.273773  141947 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1029 08:21:47.377775  141947 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761726107.336897017
	
	I1029 08:21:47.377799  141947 fix.go:216] guest clock: 1761726107.336897017
	I1029 08:21:47.377810  141947 fix.go:229] Guest: 2025-10-29 08:21:47.336897017 +0000 UTC Remote: 2025-10-29 08:21:47.27097946 +0000 UTC m=+19.042272394 (delta=65.917557ms)
	I1029 08:21:47.377831  141947 fix.go:200] guest clock delta is within tolerance: 65.917557ms
	I1029 08:21:47.377840  141947 start.go:83] releasing machines lock for "addons-131912", held for 19.052277118s
	I1029 08:21:47.380513  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:47.380856  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:47.380886  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:47.381419  141947 ssh_runner.go:195] Run: cat /version.json
	I1029 08:21:47.381462  141947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 08:21:47.384454  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:47.384492  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:47.384873  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:47.384909  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:47.384916  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:47.384946  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:47.385084  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:21:47.385224  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:21:47.461569  141947 ssh_runner.go:195] Run: systemctl --version
	I1029 08:21:47.486111  141947 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 08:21:47.639120  141947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 08:21:47.645509  141947 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 08:21:47.645582  141947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 08:21:47.663871  141947 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1029 08:21:47.663891  141947 start.go:496] detecting cgroup driver to use...
	I1029 08:21:47.663961  141947 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 08:21:47.681536  141947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 08:21:47.698105  141947 docker.go:218] disabling cri-docker service (if available) ...
	I1029 08:21:47.698152  141947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 08:21:47.714171  141947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 08:21:47.729508  141947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 08:21:47.874849  141947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 08:21:48.091955  141947 docker.go:234] disabling docker service ...
	I1029 08:21:48.092036  141947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 08:21:48.109158  141947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 08:21:48.122907  141947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 08:21:48.268947  141947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 08:21:48.404153  141947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 08:21:48.419490  141947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 08:21:48.441033  141947 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 08:21:48.441098  141947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:48.452149  141947 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 08:21:48.452238  141947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:48.463723  141947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:48.474780  141947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:48.485857  141947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 08:21:48.497214  141947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:48.508134  141947 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:48.526176  141947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 08:21:48.537157  141947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 08:21:48.546384  141947 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1029 08:21:48.546438  141947 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1029 08:21:48.564769  141947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 08:21:48.575063  141947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:21:48.709366  141947 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 08:21:48.820018  141947 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 08:21:48.820123  141947 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 08:21:48.825327  141947 start.go:564] Will wait 60s for crictl version
	I1029 08:21:48.825434  141947 ssh_runner.go:195] Run: which crictl
	I1029 08:21:48.829566  141947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1029 08:21:48.865264  141947 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1029 08:21:48.865367  141947 ssh_runner.go:195] Run: crio --version
	I1029 08:21:48.892574  141947 ssh_runner.go:195] Run: crio --version
	I1029 08:21:48.921016  141947 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1029 08:21:48.924615  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:48.924957  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:21:48.924978  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:21:48.925152  141947 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1029 08:21:48.929122  141947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:21:48.942877  141947 kubeadm.go:884] updating cluster {Name:addons-131912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-131912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 08:21:48.943009  141947 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:21:48.943074  141947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:21:48.974320  141947 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1029 08:21:48.974395  141947 ssh_runner.go:195] Run: which lz4
	I1029 08:21:48.978787  141947 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1029 08:21:48.983279  141947 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1029 08:21:48.983299  141947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1029 08:21:50.300585  141947 crio.go:462] duration metric: took 1.321817507s to copy over tarball
	I1029 08:21:50.300680  141947 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1029 08:21:51.854075  141947 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.553363584s)
	I1029 08:21:51.854101  141947 crio.go:469] duration metric: took 1.553483602s to extract the tarball
	I1029 08:21:51.854108  141947 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1029 08:21:51.896269  141947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 08:21:51.940459  141947 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 08:21:51.940484  141947 cache_images.go:86] Images are preloaded, skipping loading
	I1029 08:21:51.940494  141947 kubeadm.go:935] updating node { 192.168.39.91 8443 v1.34.1 crio true true} ...
	I1029 08:21:51.940588  141947 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-131912 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-131912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 08:21:51.940654  141947 ssh_runner.go:195] Run: crio config
	I1029 08:21:51.984974  141947 cni.go:84] Creating CNI manager for ""
	I1029 08:21:51.985001  141947 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1029 08:21:51.985020  141947 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 08:21:51.985045  141947 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.91 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-131912 NodeName:addons-131912 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 08:21:51.985239  141947 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-131912"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.91"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.91"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 08:21:51.985310  141947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 08:21:51.996899  141947 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 08:21:51.996960  141947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 08:21:52.007913  141947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1029 08:21:52.027118  141947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 08:21:52.045277  141947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1029 08:21:52.063600  141947 ssh_runner.go:195] Run: grep 192.168.39.91	control-plane.minikube.internal$ /etc/hosts
	I1029 08:21:52.067297  141947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 08:21:52.080235  141947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 08:21:52.211769  141947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:21:52.239563  141947 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912 for IP: 192.168.39.91
	I1029 08:21:52.239586  141947 certs.go:195] generating shared ca certs ...
	I1029 08:21:52.239608  141947 certs.go:227] acquiring lock for ca certs: {Name:mk7a2a9c7bc52f8ce34b75ca46a18294b750be87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:52.240524  141947 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-137232/.minikube/ca.key
	I1029 08:21:52.572358  141947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt ...
	I1029 08:21:52.572389  141947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt: {Name:mkc23576a938ec00dffdfd19659f88d225d30114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:52.572581  141947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-137232/.minikube/ca.key ...
	I1029 08:21:52.572593  141947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/ca.key: {Name:mk4b752764e81cdfb5e980d8265dcd76015edba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:52.573377  141947 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.key
	I1029 08:21:53.249903  141947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.crt ...
	I1029 08:21:53.249935  141947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.crt: {Name:mka93e04ff02422ca636c0b002532a0855c6fcc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:53.250803  141947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.key ...
	I1029 08:21:53.250819  141947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.key: {Name:mk4ce20a4684e142cea26d415dd49815a18d5818 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:53.250926  141947 certs.go:257] generating profile certs ...
	I1029 08:21:53.250986  141947 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.key
	I1029 08:21:53.251000  141947 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt with IP's: []
	I1029 08:21:53.352042  141947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt ...
	I1029 08:21:53.352067  141947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: {Name:mk49eb8b31b77998df864f518f856017016df705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:53.352218  141947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.key ...
	I1029 08:21:53.352232  141947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.key: {Name:mk35d2a9d63d83835d4fb8fd82b91ba9ea999684 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:53.352301  141947 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/apiserver.key.7b62dfe6
	I1029 08:21:53.352327  141947 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/apiserver.crt.7b62dfe6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.91]
	I1029 08:21:53.810286  141947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/apiserver.crt.7b62dfe6 ...
	I1029 08:21:53.810315  141947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/apiserver.crt.7b62dfe6: {Name:mk5269bbf320df8b86d9f29e7339d7e3cd86942d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:53.811225  141947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/apiserver.key.7b62dfe6 ...
	I1029 08:21:53.811242  141947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/apiserver.key.7b62dfe6: {Name:mk40240114a4b095baead27cf7a566081bea1b33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:53.811312  141947 certs.go:382] copying /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/apiserver.crt.7b62dfe6 -> /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/apiserver.crt
	I1029 08:21:53.811380  141947 certs.go:386] copying /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/apiserver.key.7b62dfe6 -> /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/apiserver.key
	I1029 08:21:53.811442  141947 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/proxy-client.key
	I1029 08:21:53.811461  141947 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/proxy-client.crt with IP's: []
	I1029 08:21:53.882352  141947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/proxy-client.crt ...
	I1029 08:21:53.882377  141947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/proxy-client.crt: {Name:mk54c4ddf9d9851d34b4ba0407e070cef8a651b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:53.882523  141947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/proxy-client.key ...
	I1029 08:21:53.882536  141947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/proxy-client.key: {Name:mk1e5d0f1d10480aa24d1da90ee84c4acf2c4f8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:21:53.882696  141947 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 08:21:53.882736  141947 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem (1082 bytes)
	I1029 08:21:53.882764  141947 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/cert.pem (1123 bytes)
	I1029 08:21:53.882785  141947 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/key.pem (1675 bytes)
	I1029 08:21:53.883489  141947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 08:21:53.927659  141947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1029 08:21:53.960548  141947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 08:21:53.988943  141947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 08:21:54.015572  141947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1029 08:21:54.042184  141947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 08:21:54.068951  141947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 08:21:54.095356  141947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 08:21:54.122863  141947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 08:21:54.149061  141947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 08:21:54.170133  141947 ssh_runner.go:195] Run: openssl version
	I1029 08:21:54.176084  141947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 08:21:54.190225  141947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:21:54.195232  141947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:21:54.195292  141947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 08:21:54.203288  141947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 08:21:54.217088  141947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 08:21:54.221796  141947 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1029 08:21:54.221847  141947 kubeadm.go:401] StartCluster: {Name:addons-131912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-131912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:21:54.221913  141947 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 08:21:54.221988  141947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 08:21:54.257853  141947 cri.go:89] found id: ""
	I1029 08:21:54.257927  141947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 08:21:54.269450  141947 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 08:21:54.280706  141947 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 08:21:54.291598  141947 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1029 08:21:54.291620  141947 kubeadm.go:158] found existing configuration files:
	
	I1029 08:21:54.291682  141947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1029 08:21:54.302001  141947 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1029 08:21:54.302063  141947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1029 08:21:54.312764  141947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1029 08:21:54.323033  141947 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1029 08:21:54.323084  141947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 08:21:54.333964  141947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1029 08:21:54.344127  141947 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1029 08:21:54.344200  141947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 08:21:54.355090  141947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1029 08:21:54.365513  141947 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1029 08:21:54.365568  141947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1029 08:21:54.376082  141947 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1029 08:21:54.430576  141947 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1029 08:21:54.430663  141947 kubeadm.go:319] [preflight] Running pre-flight checks
	I1029 08:21:54.518719  141947 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1029 08:21:54.518872  141947 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1029 08:21:54.519023  141947 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1029 08:21:54.530325  141947 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1029 08:21:54.606578  141947 out.go:252]   - Generating certificates and keys ...
	I1029 08:21:54.606712  141947 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1029 08:21:54.606790  141947 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1029 08:21:54.723610  141947 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1029 08:21:54.879777  141947 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1029 08:21:55.305970  141947 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1029 08:21:55.609100  141947 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1029 08:21:55.707121  141947 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1029 08:21:55.707252  141947 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-131912 localhost] and IPs [192.168.39.91 127.0.0.1 ::1]
	I1029 08:21:55.881907  141947 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1029 08:21:55.882093  141947 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-131912 localhost] and IPs [192.168.39.91 127.0.0.1 ::1]
	I1029 08:21:55.964161  141947 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1029 08:21:56.445501  141947 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1029 08:21:56.695886  141947 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1029 08:21:56.695984  141947 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1029 08:21:56.868653  141947 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1029 08:21:57.081478  141947 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1029 08:21:57.125321  141947 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1029 08:21:57.270783  141947 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1029 08:21:58.286909  141947 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1029 08:21:58.287038  141947 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1029 08:21:58.288847  141947 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1029 08:21:58.290330  141947 out.go:252]   - Booting up control plane ...
	I1029 08:21:58.290456  141947 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1029 08:21:58.290559  141947 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1029 08:21:58.292688  141947 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1029 08:21:58.316082  141947 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1029 08:21:58.316265  141947 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1029 08:21:58.322574  141947 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1029 08:21:58.322753  141947 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1029 08:21:58.322866  141947 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1029 08:21:58.468610  141947 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1029 08:21:58.468732  141947 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1029 08:21:59.470657  141947 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00312827s
	I1029 08:21:59.473485  141947 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1029 08:21:59.473611  141947 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.91:8443/livez
	I1029 08:21:59.473768  141947 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1029 08:21:59.473888  141947 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1029 08:22:03.545527  141947 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.075668419s
	I1029 08:22:03.602680  141947 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.132950241s
	I1029 08:22:04.971001  141947 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502308522s
	I1029 08:22:04.983381  141947 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1029 08:22:05.000084  141947 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1029 08:22:05.012769  141947 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1029 08:22:05.012983  141947 kubeadm.go:319] [mark-control-plane] Marking the node addons-131912 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1029 08:22:05.023876  141947 kubeadm.go:319] [bootstrap-token] Using token: me0zkc.st6pkgdmib6uiy1c
	I1029 08:22:05.024958  141947 out.go:252]   - Configuring RBAC rules ...
	I1029 08:22:05.025075  141947 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1029 08:22:05.029586  141947 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1029 08:22:05.035994  141947 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1029 08:22:05.038786  141947 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1029 08:22:05.044857  141947 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1029 08:22:05.047960  141947 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1029 08:22:05.381190  141947 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1029 08:22:05.828154  141947 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1029 08:22:06.379670  141947 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1029 08:22:06.380878  141947 kubeadm.go:319] 
	I1029 08:22:06.380978  141947 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1029 08:22:06.380995  141947 kubeadm.go:319] 
	I1029 08:22:06.381106  141947 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1029 08:22:06.381116  141947 kubeadm.go:319] 
	I1029 08:22:06.381188  141947 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1029 08:22:06.381284  141947 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1029 08:22:06.381356  141947 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1029 08:22:06.381380  141947 kubeadm.go:319] 
	I1029 08:22:06.381481  141947 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1029 08:22:06.381519  141947 kubeadm.go:319] 
	I1029 08:22:06.381589  141947 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1029 08:22:06.381599  141947 kubeadm.go:319] 
	I1029 08:22:06.381670  141947 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1029 08:22:06.381767  141947 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1029 08:22:06.381864  141947 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1029 08:22:06.381873  141947 kubeadm.go:319] 
	I1029 08:22:06.381994  141947 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1029 08:22:06.382100  141947 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1029 08:22:06.382110  141947 kubeadm.go:319] 
	I1029 08:22:06.382223  141947 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token me0zkc.st6pkgdmib6uiy1c \
	I1029 08:22:06.382361  141947 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fec88f9e43a6664e89cf9c684f195947a6147e7c18413ec6f791879b45ef2f6f \
	I1029 08:22:06.382413  141947 kubeadm.go:319] 	--control-plane 
	I1029 08:22:06.382423  141947 kubeadm.go:319] 
	I1029 08:22:06.382531  141947 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1029 08:22:06.382542  141947 kubeadm.go:319] 
	I1029 08:22:06.382653  141947 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token me0zkc.st6pkgdmib6uiy1c \
	I1029 08:22:06.382798  141947 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fec88f9e43a6664e89cf9c684f195947a6147e7c18413ec6f791879b45ef2f6f 
	I1029 08:22:06.383999  141947 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1029 08:22:06.384036  141947 cni.go:84] Creating CNI manager for ""
	I1029 08:22:06.384051  141947 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1029 08:22:06.385643  141947 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1029 08:22:06.386796  141947 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1029 08:22:06.398364  141947 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1029 08:22:06.419652  141947 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 08:22:06.419796  141947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:22:06.419842  141947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-131912 minikube.k8s.io/updated_at=2025_10_29T08_22_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac minikube.k8s.io/name=addons-131912 minikube.k8s.io/primary=true
	I1029 08:22:06.477102  141947 ops.go:34] apiserver oom_adj: -16
	I1029 08:22:06.552314  141947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:22:07.053179  141947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:22:07.552787  141947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:22:08.052588  141947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:22:08.552452  141947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:22:09.052546  141947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:22:09.553172  141947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:22:10.052455  141947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:22:10.553319  141947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:22:11.052555  141947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:22:11.552767  141947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1029 08:22:11.634991  141947 kubeadm.go:1114] duration metric: took 5.215290101s to wait for elevateKubeSystemPrivileges
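
The elevate step timed above is the clusterrolebinding created at 08:22:06, with "kubectl get sa default" polled until the default service account exists. An equivalent standalone invocation with a plain kubectl (kubeconfig handling omitted; minikube runs its bundled binary with --kubeconfig=/var/lib/minikube/kubeconfig inside the VM) would be:

# Grant cluster-admin to the kube-system default service account, as the log above does.
kubectl create clusterrolebinding minikube-rbac \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default
# The wait loop above polls until this returns successfully.
kubectl get sa default
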
	I1029 08:22:11.635041  141947 kubeadm.go:403] duration metric: took 17.41319526s to StartCluster
	I1029 08:22:11.635073  141947 settings.go:142] acquiring lock: {Name:mkf57999febc1e58dfdf035d9c465d8b8e2fde1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:22:11.635235  141947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-137232/kubeconfig
	I1029 08:22:11.635799  141947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/kubeconfig: {Name:mk5d77803dd54d458a7a9c3d32d70e7b02c64781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 08:22:11.636584  141947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1029 08:22:11.636616  141947 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.91 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 08:22:11.636689  141947 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1029 08:22:11.636823  141947 config.go:182] Loaded profile config "addons-131912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:11.636840  141947 addons.go:70] Setting metrics-server=true in profile "addons-131912"
	I1029 08:22:11.636830  141947 addons.go:70] Setting yakd=true in profile "addons-131912"
	I1029 08:22:11.636882  141947 addons.go:70] Setting inspektor-gadget=true in profile "addons-131912"
	I1029 08:22:11.636893  141947 addons.go:239] Setting addon yakd=true in "addons-131912"
	I1029 08:22:11.636899  141947 addons.go:70] Setting volumesnapshots=true in profile "addons-131912"
	I1029 08:22:11.636914  141947 addons.go:239] Setting addon volumesnapshots=true in "addons-131912"
	I1029 08:22:11.636917  141947 addons.go:239] Setting addon inspektor-gadget=true in "addons-131912"
	I1029 08:22:11.636890  141947 addons.go:70] Setting default-storageclass=true in profile "addons-131912"
	I1029 08:22:11.636930  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.636915  141947 addons.go:70] Setting storage-provisioner=true in profile "addons-131912"
	I1029 08:22:11.636949  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.636955  141947 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-131912"
	I1029 08:22:11.636967  141947 addons.go:239] Setting addon storage-provisioner=true in "addons-131912"
	I1029 08:22:11.636972  141947 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-131912"
	I1029 08:22:11.636982  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.636953  141947 addons.go:70] Setting volcano=true in profile "addons-131912"
	I1029 08:22:11.637010  141947 addons.go:239] Setting addon volcano=true in "addons-131912"
	I1029 08:22:11.637024  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.637024  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.637069  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.637593  141947 addons.go:70] Setting registry=true in profile "addons-131912"
	I1029 08:22:11.637615  141947 addons.go:239] Setting addon registry=true in "addons-131912"
	I1029 08:22:11.637639  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.637657  141947 addons.go:70] Setting registry-creds=true in profile "addons-131912"
	I1029 08:22:11.637698  141947 addons.go:239] Setting addon registry-creds=true in "addons-131912"
	I1029 08:22:11.637724  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.636944  141947 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-131912"
	I1029 08:22:11.636893  141947 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-131912"
	I1029 08:22:11.636887  141947 addons.go:239] Setting addon metrics-server=true in "addons-131912"
	I1029 08:22:11.638222  141947 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-131912"
	I1029 08:22:11.638252  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.638390  141947 addons.go:70] Setting cloud-spanner=true in profile "addons-131912"
	I1029 08:22:11.638436  141947 addons.go:239] Setting addon cloud-spanner=true in "addons-131912"
	I1029 08:22:11.638470  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.638659  141947 out.go:179] * Verifying Kubernetes components...
	I1029 08:22:11.638748  141947 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-131912"
	I1029 08:22:11.638763  141947 addons.go:70] Setting ingress=true in profile "addons-131912"
	I1029 08:22:11.638776  141947 addons.go:239] Setting addon ingress=true in "addons-131912"
	I1029 08:22:11.638800  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.638840  141947 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-131912"
	I1029 08:22:11.638868  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.639111  141947 addons.go:70] Setting gcp-auth=true in profile "addons-131912"
	I1029 08:22:11.639136  141947 mustload.go:66] Loading cluster: addons-131912
	I1029 08:22:11.639332  141947 config.go:182] Loaded profile config "addons-131912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:22:11.639759  141947 addons.go:70] Setting ingress-dns=true in profile "addons-131912"
	I1029 08:22:11.639783  141947 addons.go:239] Setting addon ingress-dns=true in "addons-131912"
	I1029 08:22:11.639814  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.638683  141947 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-131912"
	I1029 08:22:11.639850  141947 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-131912"
	I1029 08:22:11.639886  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.639932  141947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1029 08:22:11.643732  141947 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1029 08:22:11.644926  141947 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 08:22:11.645003  141947 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1029 08:22:11.645009  141947 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1029 08:22:11.644927  141947 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1029 08:22:11.644928  141947 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1029 08:22:11.646016  141947 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1029 08:22:11.646032  141947 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1029 08:22:11.646048  141947 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 08:22:11.646063  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 08:22:11.646653  141947 out.go:179]   - Using image docker.io/registry:3.0.0
	I1029 08:22:11.646668  141947 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1029 08:22:11.647018  141947 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1029 08:22:11.646673  141947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1029 08:22:11.647105  141947 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1029 08:22:11.646727  141947 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1029 08:22:11.647155  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1029 08:22:11.647381  141947 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1029 08:22:11.648786  141947 addons.go:239] Setting addon default-storageclass=true in "addons-131912"
	I1029 08:22:11.648799  141947 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-131912"
	I1029 08:22:11.648819  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.648831  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.648847  141947 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1029 08:22:11.648879  141947 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1029 08:22:11.648895  141947 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1029 08:22:11.648907  141947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1029 08:22:11.648923  141947 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1029 08:22:11.649802  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1029 08:22:11.648937  141947 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1029 08:22:11.650061  141947 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1029 08:22:11.650073  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1029 08:22:11.649001  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:11.650022  141947 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1029 08:22:11.650498  141947 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1029 08:22:11.650846  141947 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1029 08:22:11.650864  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1029 08:22:11.651373  141947 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1029 08:22:11.651430  141947 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1029 08:22:11.651447  141947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1029 08:22:11.651606  141947 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:22:11.652388  141947 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1029 08:22:11.652418  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1029 08:22:11.652625  141947 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1029 08:22:11.652638  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1029 08:22:11.654302  141947 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:22:11.654311  141947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1029 08:22:11.655350  141947 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 08:22:11.655367  141947 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 08:22:11.655484  141947 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1029 08:22:11.655498  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1029 08:22:11.656382  141947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1029 08:22:11.657003  141947 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1029 08:22:11.657754  141947 out.go:179]   - Using image docker.io/busybox:stable
	I1029 08:22:11.657872  141947 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1029 08:22:11.658693  141947 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1029 08:22:11.658720  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1029 08:22:11.658698  141947 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1029 08:22:11.659481  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.660740  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.661782  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.661988  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.661996  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.662065  141947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1029 08:22:11.663693  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.663770  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.664282  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.664316  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.664332  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:11.664642  141947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1029 08:22:11.665019  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.665575  141947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1029 08:22:11.665594  141947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1029 08:22:11.665598  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.665628  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.665685  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.665732  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:11.665753  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.665834  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.665849  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.665866  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.666353  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.666364  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:11.666447  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.666600  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:11.666673  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:11.667233  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.667305  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.667626  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.667648  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.667666  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.667676  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.667770  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.667804  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.667864  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.667929  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:11.668312  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.668305  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:11.668445  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:11.668646  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:11.669236  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.669272  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.669501  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.669503  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:11.669541  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.669564  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.669859  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:11.669921  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.670487  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.670523  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.670646  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.670687  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.670737  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:11.670967  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:11.671236  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.671640  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.671666  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.671838  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:11.671875  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.672261  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:11.672282  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:11.672436  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	W1029 08:22:11.920478  141947 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47298->192.168.39.91:22: read: connection reset by peer
	I1029 08:22:11.920514  141947 retry.go:31] will retry after 279.281733ms: ssh: handshake failed: read tcp 192.168.39.1:47298->192.168.39.91:22: read: connection reset by peer
	I1029 08:22:12.328943  141947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 08:22:12.328954  141947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1029 08:22:12.427723  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1029 08:22:12.437923  141947 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1029 08:22:12.437948  141947 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1029 08:22:12.480544  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 08:22:12.518577  141947 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:12.518603  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1029 08:22:12.551796  141947 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1029 08:22:12.551822  141947 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1029 08:22:12.565489  141947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1029 08:22:12.565522  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1029 08:22:12.604564  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1029 08:22:12.612148  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1029 08:22:12.627717  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1029 08:22:12.651027  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1029 08:22:12.652807  141947 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1029 08:22:12.652827  141947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1029 08:22:12.671818  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1029 08:22:12.737502  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 08:22:12.754312  141947 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1029 08:22:12.754340  141947 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1029 08:22:12.851801  141947 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1029 08:22:12.851834  141947 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1029 08:22:13.087473  141947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1029 08:22:13.087516  141947 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1029 08:22:13.181375  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:13.195328  141947 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1029 08:22:13.195353  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1029 08:22:13.229246  141947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1029 08:22:13.229277  141947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1029 08:22:13.296098  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1029 08:22:13.348779  141947 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1029 08:22:13.348805  141947 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1029 08:22:13.403781  141947 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1029 08:22:13.403813  141947 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1029 08:22:13.471467  141947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1029 08:22:13.471493  141947 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1029 08:22:13.527078  141947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1029 08:22:13.527108  141947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1029 08:22:13.600327  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1029 08:22:13.627630  141947 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1029 08:22:13.627657  141947 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1029 08:22:13.706147  141947 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1029 08:22:13.706183  141947 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1029 08:22:13.815577  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1029 08:22:13.890035  141947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1029 08:22:13.890076  141947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1029 08:22:14.041304  141947 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1029 08:22:14.041328  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1029 08:22:14.085070  141947 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1029 08:22:14.085100  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1029 08:22:14.248011  141947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1029 08:22:14.248041  141947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1029 08:22:14.325367  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1029 08:22:14.362826  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1029 08:22:14.422680  141947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1029 08:22:14.422715  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1029 08:22:14.618728  141947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1029 08:22:14.618774  141947 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1029 08:22:14.946452  141947 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.617378963s)
	I1029 08:22:14.946477  141947 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.617490603s)
	I1029 08:22:14.946497  141947 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
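
The sed pipeline completed above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to 192.168.39.1. One way to inspect the result, with the expected fragment sketched from the pipeline's insert text (the surrounding directives are assumed from the stock Corefile):

# Print the patched Corefile from the coredns ConfigMap.
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
# Expected fragment, inserted ahead of the forward directive:
#     hosts {
#        192.168.39.1 host.minikube.internal
#        fallthrough
#     }
#     forward . /etc/resolv.conf
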
	I1029 08:22:14.947253  141947 node_ready.go:35] waiting up to 6m0s for node "addons-131912" to be "Ready" ...
	I1029 08:22:14.949062  141947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1029 08:22:14.949085  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1029 08:22:14.981746  141947 node_ready.go:49] node "addons-131912" is "Ready"
	I1029 08:22:14.981789  141947 node_ready.go:38] duration metric: took 34.507708ms for node "addons-131912" to be "Ready" ...
	I1029 08:22:14.981807  141947 api_server.go:52] waiting for apiserver process to appear ...
	I1029 08:22:14.981877  141947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:22:15.252601  141947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1029 08:22:15.252626  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1029 08:22:15.375972  141947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1029 08:22:15.376006  141947 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1029 08:22:15.465971  141947 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-131912" context rescaled to 1 replicas
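
The rescale noted above trims coredns to a single replica for the single-node cluster; minikube performs it through client-go, but the effect is the same as:

# Scale the coredns deployment down to one replica.
kubectl -n kube-system scale deployment coredns --replicas=1
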
	I1029 08:22:15.812159  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1029 08:22:16.516747  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.088969413s)
	I1029 08:22:17.053516  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.572934121s)
	I1029 08:22:19.100604  141947 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1029 08:22:19.103835  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:19.104321  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:19.104350  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:19.104554  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:19.322606  141947 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1029 08:22:19.399127  141947 addons.go:239] Setting addon gcp-auth=true in "addons-131912"
	I1029 08:22:19.399215  141947 host.go:66] Checking if "addons-131912" exists ...
	I1029 08:22:19.401648  141947 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1029 08:22:19.404624  141947 main.go:143] libmachine: domain addons-131912 has defined MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:19.405176  141947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:ae:84", ip: ""} in network mk-addons-131912: {Iface:virbr1 ExpiryTime:2025-10-29 09:21:44 +0000 UTC Type:0 Mac:52:54:00:75:ae:84 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:addons-131912 Clientid:01:52:54:00:75:ae:84}
	I1029 08:22:19.405218  141947 main.go:143] libmachine: domain addons-131912 has defined IP address 192.168.39.91 and MAC address 52:54:00:75:ae:84 in network mk-addons-131912
	I1029 08:22:19.405465  141947 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/addons-131912/id_rsa Username:docker}
	I1029 08:22:19.632815  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.028201401s)
	I1029 08:22:19.632842  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.02066131s)
	I1029 08:22:19.632865  141947 addons.go:480] Verifying addon ingress=true in "addons-131912"
	I1029 08:22:19.632901  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.005150248s)
	I1029 08:22:19.633012  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.961174926s)
	I1029 08:22:19.633067  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.895538856s)
	I1029 08:22:19.632961  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.981907309s)
	I1029 08:22:19.633184  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.451776795s)
	W1029 08:22:19.633221  141947 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:19.633247  141947 retry.go:31] will retry after 227.655801ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:19.633254  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.337114762s)
	I1029 08:22:19.633303  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.032948393s)
	I1029 08:22:19.633324  141947 addons.go:480] Verifying addon registry=true in "addons-131912"
	I1029 08:22:19.633371  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.817762169s)
	I1029 08:22:19.633484  141947 addons.go:480] Verifying addon metrics-server=true in "addons-131912"
	I1029 08:22:19.633510  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.308098447s)
	W1029 08:22:19.633544  141947 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1029 08:22:19.633561  141947 retry.go:31] will retry after 214.020166ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
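
The volumesnapshot retry above is an ordering race: the VolumeSnapshotClass is applied in the same batch as the CRD that defines it, so the first apply fails with "ensure CRDs are installed first" and the retry a few hundred milliseconds later succeeds once the CRDs are established. When applying these manifests by hand (minikube itself simply retries), the race can be avoided by waiting for establishment explicitly, for example:

# Apply the snapshot CRDs first, wait for them to be established, then apply objects that use them.
kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl wait --for=condition=established --timeout=60s \
  crd/volumesnapshotclasses.snapshot.storage.k8s.io
kubectl apply -f csi-hostpath-snapshotclass.yaml
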
	I1029 08:22:19.633567  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.270705274s)
	I1029 08:22:19.633647  141947 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.651744981s)
	I1029 08:22:19.633710  141947 api_server.go:72] duration metric: took 7.997060123s to wait for apiserver process to appear ...
	I1029 08:22:19.633718  141947 api_server.go:88] waiting for apiserver healthz status ...
	I1029 08:22:19.633741  141947 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I1029 08:22:19.634903  141947 out.go:179] * Verifying ingress addon...
	I1029 08:22:19.635578  141947 out.go:179] * Verifying registry addon...
	I1029 08:22:19.635592  141947 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-131912 service yakd-dashboard -n yakd-dashboard
	
	I1029 08:22:19.637132  141947 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1029 08:22:19.637972  141947 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1029 08:22:19.668116  141947 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1029 08:22:19.668144  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:19.669048  141947 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1029 08:22:19.669073  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1029 08:22:19.670991  141947 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
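
The default-storageclass failure above is an optimistic-concurrency conflict: the attempt to clear the default annotation on the local-path StorageClass raced with another writer, so the apiserver rejected the stale object. Re-issuing the change against the current version is enough; a manual equivalent using the standard default-class annotation (the exact update minikube performs is not shown in the log) is:

# Mark the local-path StorageClass as non-default; rerun if the apiserver reports a conflict.
kubectl patch storageclass local-path -p \
  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
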
	I1029 08:22:19.686015  141947 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I1029 08:22:19.687254  141947 api_server.go:141] control plane version: v1.34.1
	I1029 08:22:19.687277  141947 api_server.go:131] duration metric: took 53.548507ms to wait for apiserver health ...
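
The healthz wait above is a plain HTTPS GET against the apiserver; the same probe can be run by hand, assuming anonymous access to /healthz is enabled as it is by default (-k skips certificate verification for the sketch):

# Manual equivalent of the apiserver health probe logged above.
curl -k https://192.168.39.91:8443/healthz
# Expected response body: ok
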
	I1029 08:22:19.687286  141947 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 08:22:19.733424  141947 system_pods.go:59] 17 kube-system pods found
	I1029 08:22:19.733474  141947 system_pods.go:61] "amd-gpu-device-plugin-sj55d" [adb9b430-f96e-4aa2-85cb-3ef45408d687] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1029 08:22:19.733486  141947 system_pods.go:61] "coredns-66bc5c9577-mhr2w" [5a334934-479c-460e-b11a-9762a9437079] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:19.733499  141947 system_pods.go:61] "coredns-66bc5c9577-pgbfp" [6818ea03-3262-4c39-9494-bd3df59e337c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:19.733510  141947 system_pods.go:61] "etcd-addons-131912" [57765a7e-fa9f-4e11-8506-bb3c45ba4445] Running
	I1029 08:22:19.733516  141947 system_pods.go:61] "kube-apiserver-addons-131912" [9bd1fa97-68e1-4b01-97d8-4598b0ef1b85] Running
	I1029 08:22:19.733525  141947 system_pods.go:61] "kube-controller-manager-addons-131912" [0a47b265-8411-4d54-b566-c9286213e3f9] Running
	I1029 08:22:19.733533  141947 system_pods.go:61] "kube-ingress-dns-minikube" [d6508363-a1cd-40ee-9b98-8c2c889e6943] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:19.733541  141947 system_pods.go:61] "kube-proxy-m64np" [7441ef9e-b4ac-4490-84f0-33c2fa61d5b6] Running
	I1029 08:22:19.733546  141947 system_pods.go:61] "kube-scheduler-addons-131912" [06966904-7980-4183-8b48-bad550d28991] Running
	I1029 08:22:19.733552  141947 system_pods.go:61] "metrics-server-85b7d694d7-v6wds" [18d952dc-36d8-4941-857e-e4559143b825] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:19.733558  141947 system_pods.go:61] "nvidia-device-plugin-daemonset-rxgd9" [d4d0fa6e-d26a-4f7e-ad6c-a4df4c9154ed] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:19.733567  141947 system_pods.go:61] "registry-6b586f9694-brxqs" [217b4645-7132-4c56-b6ad-dbce444c774f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:19.733573  141947 system_pods.go:61] "registry-creds-764b6fb674-m9dgl" [6f4b50eb-088d-4576-84a6-96e1318e3fef] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:19.733579  141947 system_pods.go:61] "registry-proxy-swmwd" [ebca9020-6bfa-4f43-82b9-f44f5142467e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:19.733594  141947 system_pods.go:61] "snapshot-controller-7d9fbc56b8-hxlpr" [5cdc8a0b-0652-4776-8610-9e2994336bc9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:19.733605  141947 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qvht4" [c4d63fa0-483d-4f41-85ef-2d2aa909487f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:19.733616  141947 system_pods.go:61] "storage-provisioner" [602311d9-bb0f-4cf9-a941-65904652d8ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:22:19.733627  141947 system_pods.go:74] duration metric: took 46.332209ms to wait for pod list to return data ...
	I1029 08:22:19.733641  141947 default_sa.go:34] waiting for default service account to be created ...
	I1029 08:22:19.746848  141947 default_sa.go:45] found service account: "default"
	I1029 08:22:19.746879  141947 default_sa.go:55] duration metric: took 13.229234ms for default service account to be created ...
	I1029 08:22:19.746892  141947 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 08:22:19.766823  141947 system_pods.go:86] 17 kube-system pods found
	I1029 08:22:19.766855  141947 system_pods.go:89] "amd-gpu-device-plugin-sj55d" [adb9b430-f96e-4aa2-85cb-3ef45408d687] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1029 08:22:19.766863  141947 system_pods.go:89] "coredns-66bc5c9577-mhr2w" [5a334934-479c-460e-b11a-9762a9437079] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:19.766873  141947 system_pods.go:89] "coredns-66bc5c9577-pgbfp" [6818ea03-3262-4c39-9494-bd3df59e337c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 08:22:19.766880  141947 system_pods.go:89] "etcd-addons-131912" [57765a7e-fa9f-4e11-8506-bb3c45ba4445] Running
	I1029 08:22:19.766887  141947 system_pods.go:89] "kube-apiserver-addons-131912" [9bd1fa97-68e1-4b01-97d8-4598b0ef1b85] Running
	I1029 08:22:19.766897  141947 system_pods.go:89] "kube-controller-manager-addons-131912" [0a47b265-8411-4d54-b566-c9286213e3f9] Running
	I1029 08:22:19.766907  141947 system_pods.go:89] "kube-ingress-dns-minikube" [d6508363-a1cd-40ee-9b98-8c2c889e6943] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1029 08:22:19.766915  141947 system_pods.go:89] "kube-proxy-m64np" [7441ef9e-b4ac-4490-84f0-33c2fa61d5b6] Running
	I1029 08:22:19.766921  141947 system_pods.go:89] "kube-scheduler-addons-131912" [06966904-7980-4183-8b48-bad550d28991] Running
	I1029 08:22:19.766926  141947 system_pods.go:89] "metrics-server-85b7d694d7-v6wds" [18d952dc-36d8-4941-857e-e4559143b825] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1029 08:22:19.766933  141947 system_pods.go:89] "nvidia-device-plugin-daemonset-rxgd9" [d4d0fa6e-d26a-4f7e-ad6c-a4df4c9154ed] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1029 08:22:19.766943  141947 system_pods.go:89] "registry-6b586f9694-brxqs" [217b4645-7132-4c56-b6ad-dbce444c774f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1029 08:22:19.766948  141947 system_pods.go:89] "registry-creds-764b6fb674-m9dgl" [6f4b50eb-088d-4576-84a6-96e1318e3fef] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1029 08:22:19.766954  141947 system_pods.go:89] "registry-proxy-swmwd" [ebca9020-6bfa-4f43-82b9-f44f5142467e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1029 08:22:19.766959  141947 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hxlpr" [5cdc8a0b-0652-4776-8610-9e2994336bc9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:19.766984  141947 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qvht4" [c4d63fa0-483d-4f41-85ef-2d2aa909487f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1029 08:22:19.767001  141947 system_pods.go:89] "storage-provisioner" [602311d9-bb0f-4cf9-a941-65904652d8ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 08:22:19.767014  141947 system_pods.go:126] duration metric: took 20.113568ms to wait for k8s-apps to be running ...
	I1029 08:22:19.767037  141947 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 08:22:19.767093  141947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:22:19.848194  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1029 08:22:19.861401  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:20.166857  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:20.167303  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:20.548807  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.736594242s)
	I1029 08:22:20.548849  141947 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-131912"
	I1029 08:22:20.548887  141947 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.147201488s)
	I1029 08:22:20.548913  141947 system_svc.go:56] duration metric: took 781.872793ms WaitForService to wait for kubelet
	I1029 08:22:20.548933  141947 kubeadm.go:587] duration metric: took 8.912283625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 08:22:20.549016  141947 node_conditions.go:102] verifying NodePressure condition ...
	I1029 08:22:20.550199  141947 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1029 08:22:20.550223  141947 out.go:179] * Verifying csi-hostpath-driver addon...
	I1029 08:22:20.551286  141947 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1029 08:22:20.552164  141947 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1029 08:22:20.552273  141947 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1029 08:22:20.552293  141947 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1029 08:22:20.578742  141947 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1029 08:22:20.578765  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:20.581062  141947 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1029 08:22:20.581094  141947 node_conditions.go:123] node cpu capacity is 2
	I1029 08:22:20.581135  141947 node_conditions.go:105] duration metric: took 32.111366ms to run NodePressure ...
	I1029 08:22:20.581150  141947 start.go:242] waiting for startup goroutines ...
	I1029 08:22:20.667514  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:20.671224  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:20.765536  141947 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1029 08:22:20.765562  141947 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1029 08:22:20.918514  141947 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1029 08:22:20.918539  141947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1029 08:22:21.022393  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1029 08:22:21.059139  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:21.161370  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:21.162305  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:21.556805  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:21.642313  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:21.645716  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:22.057372  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:22.142243  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:22.143234  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:22.560461  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:22.724021  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:22.724193  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:22.770437  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.92215861s)
	I1029 08:22:23.041979  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.180502261s)
	W1029 08:22:23.042030  141947 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:23.042037  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.019594405s)
	I1029 08:22:23.042053  141947 retry.go:31] will retry after 438.646721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
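
The retry cycles above and below all report the same kubectl validation error: /etc/kubernetes/addons/ig-crd.yaml is rejected because every top-level object in a Kubernetes manifest must declare both apiVersion and kind. minikube keeps re-running the apply with growing pauses (roughly 0.44s, 0.59s, 1.05s, 1.87s, 2.75s, 5.73s and 8.16s in this run). As a rough illustration of that jittered, roughly doubling backoff pattern, the Go sketch below uses a hypothetical retryWithBackoff helper; it is only a sketch of the pattern suggested by these intervals, not minikube's actual retry.go implementation.

    // backoff_sketch.go - illustrative only; not minikube's retry package.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts are exhausted,
    // roughly doubling the wait between tries and adding jitter, similar to the
    // growing "will retry after ..." intervals in the log above.
    func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
    	wait := base
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Up to 50% jitter so concurrent retries do not synchronize.
    		jitter := time.Duration(rand.Int63n(int64(wait) / 2))
    		time.Sleep(wait + jitter)
    		wait *= 2
    	}
    	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

    func main() {
    	calls := 0
    	err := retryWithBackoff(func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("apply failed")
    		}
    		return nil
    	}, 5, 400*time.Millisecond)
    	fmt.Println(calls, err)
    }
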
	I1029 08:22:23.043016  141947 addons.go:480] Verifying addon gcp-auth=true in "addons-131912"
	I1029 08:22:23.044234  141947 out.go:179] * Verifying gcp-auth addon...
	I1029 08:22:23.045937  141947 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1029 08:22:23.051188  141947 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1029 08:22:23.051209  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:23.056815  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:23.153644  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:23.153918  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:23.481462  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:23.551226  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:23.557448  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:23.651699  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:23.652640  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:24.053798  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:24.056749  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:24.141863  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:24.144960  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:24.550537  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:24.556416  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:24.593524  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.111988566s)
	W1029 08:22:24.593581  141947 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:24.593610  141947 retry.go:31] will retry after 588.425928ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:24.647188  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:24.650067  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:25.051809  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:25.060262  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:25.143578  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:25.145294  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:25.182441  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:25.551568  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:25.555690  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:25.642973  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:25.645221  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:26.052879  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:26.057077  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:26.159018  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:26.159125  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:26.241790  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.059307627s)
	W1029 08:22:26.241844  141947 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:26.241865  141947 retry.go:31] will retry after 1.04844612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:26.549523  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:26.555329  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:26.641779  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:26.642591  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:27.051733  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:27.054883  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:27.141143  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:27.141981  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:27.291221  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:27.555335  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:27.557919  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:27.641078  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:27.644191  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:28.050935  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:28.055705  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:22:28.061464  141947 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:28.061498  141947 retry.go:31] will retry after 1.873716732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:28.141075  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:28.142100  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:28.551806  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:28.555376  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:28.640880  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:28.644962  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:29.052098  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:29.055831  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:29.141959  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:29.142102  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:29.557217  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:29.559065  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:29.643501  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:29.643781  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:29.936196  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:30.048830  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:30.056297  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:30.141931  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:30.143515  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:30.549859  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:30.556138  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:30.644300  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:30.644396  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:31.021512  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.085264973s)
	W1029 08:22:31.021555  141947 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:31.021576  141947 retry.go:31] will retry after 1.087576949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:31.050830  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:31.054992  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:31.143436  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:31.143533  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:31.634825  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:31.634950  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:31.734922  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:31.735901  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:32.050741  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:32.055179  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:32.110244  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:32.144812  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:32.145903  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:32.551972  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:32.556682  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:32.645739  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:32.645823  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1029 08:22:32.996758  141947 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:32.996807  141947 retry.go:31] will retry after 2.747460289s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:33.050304  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:33.055112  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:33.142443  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:33.142831  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:33.552145  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:33.557755  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:33.642709  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:33.642918  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:34.054709  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:34.057832  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:34.143213  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:34.145331  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:34.551308  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:34.555212  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:34.642891  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:34.643109  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:35.050796  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:35.060435  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:35.144437  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:35.147740  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:35.552717  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:35.555175  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:35.640940  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:35.644169  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:35.745436  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:36.051947  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:36.057556  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:36.152384  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:36.152434  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:36.553260  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:36.555991  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:22:36.584331  141947 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:36.584370  141947 retry.go:31] will retry after 5.72547109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:36.641371  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:36.642623  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:37.049463  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:37.055178  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:37.141884  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:37.142627  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:37.549818  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:37.555603  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:37.641053  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:37.642760  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:38.049151  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:38.056110  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:38.141310  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:38.141726  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:38.550951  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:38.556140  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:38.641946  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:38.642004  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:39.052859  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:39.055688  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:39.140391  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:39.142325  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:39.550999  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:39.558276  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:39.642736  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:39.644324  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:40.050233  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:40.055337  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:40.141692  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:40.143035  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:40.549219  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:40.556050  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:40.646271  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:40.647697  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:41.051769  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:41.055578  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:41.143712  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:41.145399  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:41.550321  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:41.555723  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:41.640864  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:41.642233  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:42.052249  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:42.056590  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:42.141671  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:42.145335  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:42.310578  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:42.551558  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:42.554729  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:42.644074  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:42.644106  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:43.049758  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:43.055924  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:43.140573  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:43.143387  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1029 08:22:43.255504  141947 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:43.255542  141947 retry.go:31] will retry after 8.162171017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
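
The recurring kapi.go:96 lines poll each addon's pods by label selector every half second or so until they leave Pending. The sketch below shows the same idea with client-go, using a hypothetical waitForPodsRunning helper and an assumed kubeconfig at the default location; it is an illustrative example, not minikube's kapi package.

    // waitforpods_sketch.go - illustrative only; not minikube's kapi package.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsRunning lists pods matching a label selector (for example
    // "kubernetes.io/minikube-addons=csi-hostpath-driver") and polls until every
    // matching pod reports phase Running, mirroring the repeated
    // "waiting for pod ... current state: Pending" lines above.
    func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval time.Duration) error {
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return err
    		}
    		ready := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				ready = false
    				break
    			}
    		}
    		if ready {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(interval):
    		}
    	}
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    	defer cancel()
    	if err := waitForPodsRunning(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 500*time.Millisecond); err != nil {
    		panic(err)
    	}
    	fmt.Println("all matching pods are Running")
    }
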
	I1029 08:22:43.551451  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:43.555077  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:43.641995  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:43.642055  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:44.049306  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:44.056884  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:44.150617  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:44.150739  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:44.550081  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:44.555446  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:44.640793  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:44.642141  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:45.049711  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:45.055827  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:45.150732  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:45.150812  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:45.549099  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:45.555576  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:45.641313  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:45.641719  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:46.049141  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:46.055795  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:46.141500  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:46.141552  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:46.549964  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:46.556230  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:46.641326  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:46.641326  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:47.049964  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:47.055490  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:47.140187  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:47.141462  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:47.549773  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:47.555457  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:47.642284  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:47.643603  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:48.052506  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:48.055948  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:48.142013  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:48.142083  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:48.550381  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:48.555068  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:48.641285  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:48.643849  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:49.051257  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:49.055115  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:49.144290  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:49.145366  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:49.551378  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:49.556575  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:49.642279  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:49.642626  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:50.050853  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:50.056396  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:50.141028  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:50.143008  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:50.551280  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:50.556889  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:50.641614  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:50.641641  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:51.050372  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:51.055024  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:51.143390  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:51.144655  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:51.417982  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:22:51.553295  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:51.555378  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:51.640990  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:51.641509  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:52.051366  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:52.055278  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:22:52.105971  141947 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:52.106012  141947 retry.go:31] will retry after 12.337320518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:22:52.141071  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:52.141476  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:52.550050  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:52.555595  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:52.640974  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:52.641146  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:53.049547  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:53.055377  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:53.141260  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:53.142174  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:53.551772  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:53.557175  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:53.642995  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:53.643585  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:54.051420  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:54.056261  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:54.141031  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:54.142843  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:54.551091  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:54.555496  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:54.640933  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:54.644201  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:55.051698  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:55.057991  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:55.141566  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:55.142601  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:55.549595  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:55.555223  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:55.643552  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:55.643599  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:56.330148  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:56.330487  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:56.331288  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:56.331530  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:56.559279  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:56.560511  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:56.640849  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:56.641950  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:57.048962  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:57.055581  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:57.141058  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:57.142141  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:57.549513  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:57.555958  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:57.641191  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:57.641511  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:58.050242  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:58.054994  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:58.141026  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:58.142598  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:58.553886  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:58.558004  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:58.642551  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:58.645399  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:59.052745  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:59.057158  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:59.144706  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:22:59.144848  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:59.550523  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:22:59.556663  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:22:59.642688  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:22:59.644714  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:00.049151  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:00.056313  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:00.140103  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:00.141796  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:00.550335  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:00.555538  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:00.640261  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:00.642102  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:01.049527  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:01.055193  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:01.141076  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:01.142227  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:01.549094  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:01.555813  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:01.641599  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:01.642357  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:02.049792  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:02.058551  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:02.141321  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:02.143673  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:02.651716  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:02.655080  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:02.655465  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:02.656606  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:03.050422  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:03.062761  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:03.141221  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:03.143272  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:03.550111  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:03.555946  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:03.641433  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:03.642273  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:04.050012  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:04.055877  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:04.141243  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:04.142172  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:04.443555  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:23:04.550763  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:04.555285  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:04.641773  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:04.643314  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:05.050693  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:05.065627  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:23:05.118555  141947 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:23:05.118601  141947 retry.go:31] will retry after 15.514760783s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:23:05.141578  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:05.142645  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:05.550334  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:05.556228  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:05.641340  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:05.642042  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:06.050138  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:06.055995  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:06.142282  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:06.143959  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:06.555811  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:06.563186  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:06.641341  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:06.642932  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:07.050947  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:07.055767  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:07.152642  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:07.153690  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:07.549788  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:07.555662  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:07.781876  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:07.781893  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:08.048844  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:08.055330  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:08.141548  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:08.141733  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:08.550171  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:08.556514  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:08.640864  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:08.642445  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:09.050110  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:09.056277  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:09.141434  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:09.141633  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:09.552497  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:09.561042  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:09.643019  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:09.643146  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:10.050251  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:10.055309  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:10.146600  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:10.147077  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:10.550188  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:10.556743  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:10.640443  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:10.641734  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:11.049734  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:11.055226  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:11.140542  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:11.142155  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:11.549315  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:11.555472  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:11.639958  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:11.640638  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:12.048924  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:12.055771  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:12.141097  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:12.142268  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:12.550608  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:12.555800  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:12.640715  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:12.642009  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:13.051971  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:13.057540  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:13.141143  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:13.143120  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:13.551311  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:13.555755  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:13.641118  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:13.643066  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:14.053127  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:14.056383  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:14.140564  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:14.141528  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:14.556196  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:14.557849  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:14.641677  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:14.641821  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:15.049648  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:15.056767  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:15.141134  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1029 08:23:15.141457  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:15.553156  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:15.555941  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:15.643097  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:15.644249  141947 kapi.go:107] duration metric: took 56.006273184s to wait for kubernetes.io/minikube-addons=registry ...
	I1029 08:23:16.050438  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:16.056679  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:16.142059  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:16.552648  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:16.557640  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:16.640772  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:17.049500  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:17.055175  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:17.141519  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:17.551924  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:17.560889  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:17.642785  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:18.051032  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:18.061130  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:18.151390  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:18.554211  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:18.556310  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:18.642977  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:19.055843  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:19.056014  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:19.141705  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:19.553073  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:19.558208  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:19.641184  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:20.049998  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:20.057580  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:20.140289  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:20.553097  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:20.556054  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:20.634051  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:23:20.640660  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:21.052136  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:21.059723  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:21.140962  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:21.549629  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:21.555742  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:21.641682  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:21.725481  141947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.091381638s)
	W1029 08:23:21.725542  141947 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:23:21.725587  141947 retry.go:31] will retry after 18.249605182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1029 08:23:22.050166  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:22.056564  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:22.143021  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:22.551435  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:22.557513  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:22.642142  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:23.053846  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:23.055649  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:23.154131  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:23.553652  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:23.558120  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:23.641524  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:24.050846  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:24.056804  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:24.143220  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:24.551450  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:24.559453  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:24.641471  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:25.049728  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:25.055940  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:25.143700  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:25.562065  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:25.572442  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:25.659595  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:26.164179  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:26.166275  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:26.166643  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:26.552985  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:26.558367  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:26.648800  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:27.052690  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:27.065391  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:27.147851  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:27.551987  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:27.557740  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:27.643343  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:28.053441  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:28.057250  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:28.154547  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:28.551814  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:28.556080  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:28.641011  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:29.049970  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:29.055689  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:29.281946  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:29.551632  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:29.557654  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:29.642268  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:30.051481  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:30.055860  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:30.142854  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:30.549274  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:30.554816  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:30.641197  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:31.049568  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:31.056280  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:31.140271  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:31.549976  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:31.556135  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:31.641490  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:32.050804  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:32.058766  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:32.153142  141947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1029 08:23:32.550957  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:32.556529  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:32.640485  141947 kapi.go:107] duration metric: took 1m13.003351126s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1029 08:23:33.050334  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:33.056317  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:33.551124  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:33.556760  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:34.050259  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:34.055188  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:34.549753  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:34.555520  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:35.050839  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:35.059037  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:35.549842  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:35.555862  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:36.049104  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:36.056570  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:36.550487  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:36.555453  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:37.054577  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:37.059717  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:37.550474  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:37.556032  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:38.049700  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1029 08:23:38.056568  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:38.549787  141947 kapi.go:107] duration metric: took 1m15.503845243s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1029 08:23:38.551011  141947 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-131912 cluster.
	I1029 08:23:38.552294  141947 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1029 08:23:38.553610  141947 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1029 08:23:38.558731  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:39.057241  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:39.559517  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:39.975895  141947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1029 08:23:40.060632  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:40.558763  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1029 08:23:40.781051  141947 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1029 08:23:40.781205  141947 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
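	The validation failure above is kubectl rejecting /etc/kubernetes/addons/ig-crd.yaml because the manifest is missing the two top-level fields every Kubernetes object needs, apiVersion and kind. A minimal sketch of the shape kubectl expects for a CustomResourceDefinition (the group, names, and schema below are illustrative placeholders, not the actual inspektor-gadget CRD contents):

		apiVersion: apiextensions.k8s.io/v1     # must be present, or validation fails as above
		kind: CustomResourceDefinition          # must be present, or validation fails as above
		metadata:
		  name: traces.gadget.example.com       # hypothetical name: <plural>.<group>
		spec:
		  group: gadget.example.com
		  names:
		    kind: Trace
		    plural: traces
		  scope: Namespaced
		  versions:
		  - name: v1alpha1
		    served: true
		    storage: true
		    schema:
		      openAPIV3Schema:
		        type: object
	As the stderr notes, the check can also be bypassed with --validate=false, but a complete manifest header is the real fix rather than turning validation off.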
	I1029 08:23:41.057363  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:41.558990  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:42.060796  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:42.555270  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:43.056710  141947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1029 08:23:43.557333  141947 kapi.go:107] duration metric: took 1m23.005168496s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1029 08:23:43.559204  141947 out.go:179] * Enabled addons: ingress-dns, storage-provisioner, registry-creds, cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1029 08:23:43.560475  141947 addons.go:515] duration metric: took 1m31.923779316s for enable addons: enabled=[ingress-dns storage-provisioner registry-creds cloud-spanner amd-gpu-device-plugin nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1029 08:23:43.560535  141947 start.go:247] waiting for cluster config update ...
	I1029 08:23:43.560561  141947 start.go:256] writing updated cluster config ...
	I1029 08:23:43.560896  141947 ssh_runner.go:195] Run: rm -f paused
	I1029 08:23:43.567447  141947 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 08:23:43.571487  141947 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mhr2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:43.576980  141947 pod_ready.go:94] pod "coredns-66bc5c9577-mhr2w" is "Ready"
	I1029 08:23:43.577004  141947 pod_ready.go:86] duration metric: took 5.490726ms for pod "coredns-66bc5c9577-mhr2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:43.579593  141947 pod_ready.go:83] waiting for pod "etcd-addons-131912" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:43.585381  141947 pod_ready.go:94] pod "etcd-addons-131912" is "Ready"
	I1029 08:23:43.585417  141947 pod_ready.go:86] duration metric: took 5.786223ms for pod "etcd-addons-131912" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:43.588128  141947 pod_ready.go:83] waiting for pod "kube-apiserver-addons-131912" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:43.592333  141947 pod_ready.go:94] pod "kube-apiserver-addons-131912" is "Ready"
	I1029 08:23:43.592357  141947 pod_ready.go:86] duration metric: took 4.206454ms for pod "kube-apiserver-addons-131912" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:43.594736  141947 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-131912" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:43.971495  141947 pod_ready.go:94] pod "kube-controller-manager-addons-131912" is "Ready"
	I1029 08:23:43.971617  141947 pod_ready.go:86] duration metric: took 376.851026ms for pod "kube-controller-manager-addons-131912" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:44.173074  141947 pod_ready.go:83] waiting for pod "kube-proxy-m64np" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:44.571991  141947 pod_ready.go:94] pod "kube-proxy-m64np" is "Ready"
	I1029 08:23:44.572022  141947 pod_ready.go:86] duration metric: took 398.918474ms for pod "kube-proxy-m64np" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:44.772119  141947 pod_ready.go:83] waiting for pod "kube-scheduler-addons-131912" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:45.171508  141947 pod_ready.go:94] pod "kube-scheduler-addons-131912" is "Ready"
	I1029 08:23:45.171542  141947 pod_ready.go:86] duration metric: took 399.387315ms for pod "kube-scheduler-addons-131912" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 08:23:45.171557  141947 pod_ready.go:40] duration metric: took 1.604073206s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 08:23:45.216561  141947 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1029 08:23:45.218455  141947 out.go:179] * Done! kubectl is now configured to use "addons-131912" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.171985516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ccb8c49-7589-4868-a982-23ea71ebb85b name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.172045483Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ccb8c49-7589-4868-a982-23ea71ebb85b name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.172346521Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16605e18e35333c6ba2011f6a7f7b761ce0a2e77239acb06d14d5aa07d44dd83,PodSandboxId:81edc2e472e80a520ee8045b913990f53d9ab38a853a25e568b13bb553c4bc76,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1761726278183648378,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d81a51a-504f-47a4-81bc-5026e5bfc0e8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e69d7c8b4c2fcc9bb59b1a46a3d5fe5f6ceec41d5c906fdf299bced1c7d0e172,PodSandboxId:e343e0e5eff3e773c261924bcbaacb6280f6fb4ccbd7d8feeebd3a74beb7ace6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761726230648496955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ab81e3e5-9e9b-468e-b6e6-98b1a48d05c6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904399d0a6ff75131cbd1a10b6bbe74fae1cbb895cb8e8365b39f1fa35d7c3eb,PodSandboxId:a394ccd2102c2aa72afcc176fc2a41bd844b5743e27e0a1f55a5df4b909e21ad,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761726211963998361,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8xxtr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 61c7eb92-cc6b-413f-9604-52e7b934a10f,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2c571392546ad6e9f8b1398b04226d5882a25fd810599a1ea655656ec077ce81,PodSandboxId:899a8f711cbc861b438a3093b320b0cfe523212bfa3a0ba9768d354ebe658b83,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761726198197212983,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tl4j6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b746f0b6-4709-44cf-a64e-058b17670de1,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2093dbf1e4ca8b5c7f092b3662fe0cf96113fd2e0e1ba02c24495637725ac6d3,PodSandboxId:83487f0aa3cf50b34ad0ae7402942a456a88c4530eab572609d0efd67744bc98,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761726197959061780,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9zb9c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8de006-2135-4010-b566-d035aa8b0932,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3413252f464f6b9b0808a6f8571e7e9caf04612dc1f03478f1560bda6eb7d0ba,PodSandboxId:2444ae898b59ecc4316138e928bd5ca075288d8b1cf09e1d7585a0ce69f24faf,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761726183255226797,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-zzqv6,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 23340c31-f583-4fcc-8405-02ca3871d702,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e80a15687212bb9a8434af0af9e05f2440c2549625ba59c12aa74ea621c44a,PodSandboxId:d6aad7a0066dfbc5abc025f0f4ea866a9a1d876f80c3c785a26e84e735dc2c67,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761726156014614174,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6508363-a1cd-40ee-9b98-8c2c889e6943,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6101355b4ff0c7aa6e6295a08542a2605b0ae06061a3b9ae0f0c829167671a,PodSandboxId:60b1b06cc91154bdd0a03820016c7f60415e738c9a0df6f
2d76e967c0121b835,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761726144018449125,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sj55d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb9b430-f96e-4aa2-85cb-3ef45408d687,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b87d6b04a98acae7dfba534f103612897c5f3eafae0af37f2c9ffa7fc54c16,PodSandboxId:76bec73
96958a1475b3f3b56473f134d2ba13f1833ed6b2e42004b9c6f31bfda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761726140698776631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602311d9-bb0f-4cf9-a941-65904652d8ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150201f1c3b669cf6fdbc6530a773c132db4869518bc3503b2da0b9a684d09f1,PodSandboxId:b5587fcd0f0b2b88774
c1eb9ab3d4ef7828cf42ca80eeb407deae57df776094b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761726133076556719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mhr2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a334934-479c-460e-b11a-9762a9437079,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b7c81fa840c8bcedb289e2295521250bc5662dd58c1fb9b14108c464ea9a6c,PodSandboxId:ab439ff04e1635742326d8de6b3c8f932a2f63abedd2a8033e34fd14338245b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761726132246243856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m64np,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7441ef9e-b4ac-4490-84f0-33c2fa61d5b6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edac35f27b26999983cbadfaefd65607919673df0bef8e7669b95a12d969e359,PodSandboxId:08e97fe200146cf7740adb7ae9cd12035f83eb09082339ca997a0573987d7dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761726120908206564,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f6068d928c16aa2cfdb78c04bb3d052,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc54808e8add8b77bea7d1954387d30165acad230531ba8a8e61f82948f06ba,PodSandboxId:9930c3c212adee9abdcc231471b07ba7ea517cba7c2d18dbb962d1e9b0b6effe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761726120518117605,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20
888938bd885ccf20850423b9545f18,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2819ec85bab8b03b491ad28a0f0ecabde313814e5d7f024250e1e8f1d236abc,PodSandboxId:8ee50cc3b253f719d56218ff12ffe42daa7f020986d9b106de1d25bd6571eb66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761726120380023091,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes
.pod.name: kube-apiserver-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f330a73d72f424296abc8191e560d26f,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3dfa8665ad62fe58c36a326e90824c0d5c4894ed96c0f8faf2d2ad8d9d23bde,PodSandboxId:695e4a6afdbe3f8037f961f3da493d0709473b020c21f80274cdfb5f59cc4e1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:176172611
9949956908,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e0c2dcd36751e9563c639803ab59f08,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ccb8c49-7589-4868-a982-23ea71ebb85b name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.189348817Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.219982254Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dafc5046-99b7-4703-9b47-85963ce60bf2 name=/runtime.v1.RuntimeService/Version
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.220249525Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dafc5046-99b7-4703-9b47-85963ce60bf2 name=/runtime.v1.RuntimeService/Version
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.221688405Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=066225e1-4712-4ca1-86a7-4df70931ac7f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.223372171Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761726419223340664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589266,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=066225e1-4712-4ca1-86a7-4df70931ac7f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.223950820Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=820f8757-4cf8-4a73-b2d1-1cc546eb345c name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.224023866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=820f8757-4cf8-4a73-b2d1-1cc546eb345c name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.224340476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16605e18e35333c6ba2011f6a7f7b761ce0a2e77239acb06d14d5aa07d44dd83,PodSandboxId:81edc2e472e80a520ee8045b913990f53d9ab38a853a25e568b13bb553c4bc76,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1761726278183648378,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d81a51a-504f-47a4-81bc-5026e5bfc0e8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e69d7c8b4c2fcc9bb59b1a46a3d5fe5f6ceec41d5c906fdf299bced1c7d0e172,PodSandboxId:e343e0e5eff3e773c261924bcbaacb6280f6fb4ccbd7d8feeebd3a74beb7ace6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761726230648496955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ab81e3e5-9e9b-468e-b6e6-98b1a48d05c6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904399d0a6ff75131cbd1a10b6bbe74fae1cbb895cb8e8365b39f1fa35d7c3eb,PodSandboxId:a394ccd2102c2aa72afcc176fc2a41bd844b5743e27e0a1f55a5df4b909e21ad,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761726211963998361,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8xxtr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 61c7eb92-cc6b-413f-9604-52e7b934a10f,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2c571392546ad6e9f8b1398b04226d5882a25fd810599a1ea655656ec077ce81,PodSandboxId:899a8f711cbc861b438a3093b320b0cfe523212bfa3a0ba9768d354ebe658b83,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761726198197212983,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tl4j6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b746f0b6-4709-44cf-a64e-058b17670de1,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2093dbf1e4ca8b5c7f092b3662fe0cf96113fd2e0e1ba02c24495637725ac6d3,PodSandboxId:83487f0aa3cf50b34ad0ae7402942a456a88c4530eab572609d0efd67744bc98,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761726197959061780,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9zb9c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8de006-2135-4010-b566-d035aa8b0932,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3413252f464f6b9b0808a6f8571e7e9caf04612dc1f03478f1560bda6eb7d0ba,PodSandboxId:2444ae898b59ecc4316138e928bd5ca075288d8b1cf09e1d7585a0ce69f24faf,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761726183255226797,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-zzqv6,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 23340c31-f583-4fcc-8405-02ca3871d702,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e80a15687212bb9a8434af0af9e05f2440c2549625ba59c12aa74ea621c44a,PodSandboxId:d6aad7a0066dfbc5abc025f0f4ea866a9a1d876f80c3c785a26e84e735dc2c67,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761726156014614174,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6508363-a1cd-40ee-9b98-8c2c889e6943,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6101355b4ff0c7aa6e6295a08542a2605b0ae06061a3b9ae0f0c829167671a,PodSandboxId:60b1b06cc91154bdd0a03820016c7f60415e738c9a0df6f
2d76e967c0121b835,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761726144018449125,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sj55d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb9b430-f96e-4aa2-85cb-3ef45408d687,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b87d6b04a98acae7dfba534f103612897c5f3eafae0af37f2c9ffa7fc54c16,PodSandboxId:76bec73
96958a1475b3f3b56473f134d2ba13f1833ed6b2e42004b9c6f31bfda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761726140698776631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602311d9-bb0f-4cf9-a941-65904652d8ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150201f1c3b669cf6fdbc6530a773c132db4869518bc3503b2da0b9a684d09f1,PodSandboxId:b5587fcd0f0b2b88774
c1eb9ab3d4ef7828cf42ca80eeb407deae57df776094b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761726133076556719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mhr2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a334934-479c-460e-b11a-9762a9437079,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b7c81fa840c8bcedb289e2295521250bc5662dd58c1fb9b14108c464ea9a6c,PodSandboxId:ab439ff04e1635742326d8de6b3c8f932a2f63abedd2a8033e34fd14338245b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761726132246243856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m64np,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7441ef9e-b4ac-4490-84f0-33c2fa61d5b6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edac35f27b26999983cbadfaefd65607919673df0bef8e7669b95a12d969e359,PodSandboxId:08e97fe200146cf7740adb7ae9cd12035f83eb09082339ca997a0573987d7dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761726120908206564,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f6068d928c16aa2cfdb78c04bb3d052,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc54808e8add8b77bea7d1954387d30165acad230531ba8a8e61f82948f06ba,PodSandboxId:9930c3c212adee9abdcc231471b07ba7ea517cba7c2d18dbb962d1e9b0b6effe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761726120518117605,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20
888938bd885ccf20850423b9545f18,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2819ec85bab8b03b491ad28a0f0ecabde313814e5d7f024250e1e8f1d236abc,PodSandboxId:8ee50cc3b253f719d56218ff12ffe42daa7f020986d9b106de1d25bd6571eb66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761726120380023091,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes
.pod.name: kube-apiserver-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f330a73d72f424296abc8191e560d26f,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3dfa8665ad62fe58c36a326e90824c0d5c4894ed96c0f8faf2d2ad8d9d23bde,PodSandboxId:695e4a6afdbe3f8037f961f3da493d0709473b020c21f80274cdfb5f59cc4e1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:176172611
9949956908,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e0c2dcd36751e9563c639803ab59f08,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=820f8757-4cf8-4a73-b2d1-1cc546eb345c name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.260258404Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=426805a2-c1ec-4a8c-b6f4-4a8bfca66de7 name=/runtime.v1.RuntimeService/Version
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.260334741Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=426805a2-c1ec-4a8c-b6f4-4a8bfca66de7 name=/runtime.v1.RuntimeService/Version
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.261375785Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c23f3a60-d649-47e7-8f92-688b41642a8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.263716025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761726419263686443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589266,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c23f3a60-d649-47e7-8f92-688b41642a8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.264301457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e05b7b36-b9c0-4cae-8239-0a068381b3b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.264374694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e05b7b36-b9c0-4cae-8239-0a068381b3b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.264726381Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16605e18e35333c6ba2011f6a7f7b761ce0a2e77239acb06d14d5aa07d44dd83,PodSandboxId:81edc2e472e80a520ee8045b913990f53d9ab38a853a25e568b13bb553c4bc76,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1761726278183648378,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d81a51a-504f-47a4-81bc-5026e5bfc0e8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e69d7c8b4c2fcc9bb59b1a46a3d5fe5f6ceec41d5c906fdf299bced1c7d0e172,PodSandboxId:e343e0e5eff3e773c261924bcbaacb6280f6fb4ccbd7d8feeebd3a74beb7ace6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761726230648496955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ab81e3e5-9e9b-468e-b6e6-98b1a48d05c6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904399d0a6ff75131cbd1a10b6bbe74fae1cbb895cb8e8365b39f1fa35d7c3eb,PodSandboxId:a394ccd2102c2aa72afcc176fc2a41bd844b5743e27e0a1f55a5df4b909e21ad,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761726211963998361,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8xxtr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 61c7eb92-cc6b-413f-9604-52e7b934a10f,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2c571392546ad6e9f8b1398b04226d5882a25fd810599a1ea655656ec077ce81,PodSandboxId:899a8f711cbc861b438a3093b320b0cfe523212bfa3a0ba9768d354ebe658b83,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761726198197212983,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tl4j6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b746f0b6-4709-44cf-a64e-058b17670de1,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2093dbf1e4ca8b5c7f092b3662fe0cf96113fd2e0e1ba02c24495637725ac6d3,PodSandboxId:83487f0aa3cf50b34ad0ae7402942a456a88c4530eab572609d0efd67744bc98,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761726197959061780,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9zb9c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8de006-2135-4010-b566-d035aa8b0932,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3413252f464f6b9b0808a6f8571e7e9caf04612dc1f03478f1560bda6eb7d0ba,PodSandboxId:2444ae898b59ecc4316138e928bd5ca075288d8b1cf09e1d7585a0ce69f24faf,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761726183255226797,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-zzqv6,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 23340c31-f583-4fcc-8405-02ca3871d702,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e80a15687212bb9a8434af0af9e05f2440c2549625ba59c12aa74ea621c44a,PodSandboxId:d6aad7a0066dfbc5abc025f0f4ea866a9a1d876f80c3c785a26e84e735dc2c67,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761726156014614174,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6508363-a1cd-40ee-9b98-8c2c889e6943,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6101355b4ff0c7aa6e6295a08542a2605b0ae06061a3b9ae0f0c829167671a,PodSandboxId:60b1b06cc91154bdd0a03820016c7f60415e738c9a0df6f
2d76e967c0121b835,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761726144018449125,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sj55d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb9b430-f96e-4aa2-85cb-3ef45408d687,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b87d6b04a98acae7dfba534f103612897c5f3eafae0af37f2c9ffa7fc54c16,PodSandboxId:76bec73
96958a1475b3f3b56473f134d2ba13f1833ed6b2e42004b9c6f31bfda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761726140698776631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602311d9-bb0f-4cf9-a941-65904652d8ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150201f1c3b669cf6fdbc6530a773c132db4869518bc3503b2da0b9a684d09f1,PodSandboxId:b5587fcd0f0b2b88774
c1eb9ab3d4ef7828cf42ca80eeb407deae57df776094b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761726133076556719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mhr2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a334934-479c-460e-b11a-9762a9437079,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b7c81fa840c8bcedb289e2295521250bc5662dd58c1fb9b14108c464ea9a6c,PodSandboxId:ab439ff04e1635742326d8de6b3c8f932a2f63abedd2a8033e34fd14338245b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761726132246243856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m64np,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7441ef9e-b4ac-4490-84f0-33c2fa61d5b6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edac35f27b26999983cbadfaefd65607919673df0bef8e7669b95a12d969e359,PodSandboxId:08e97fe200146cf7740adb7ae9cd12035f83eb09082339ca997a0573987d7dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761726120908206564,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f6068d928c16aa2cfdb78c04bb3d052,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc54808e8add8b77bea7d1954387d30165acad230531ba8a8e61f82948f06ba,PodSandboxId:9930c3c212adee9abdcc231471b07ba7ea517cba7c2d18dbb962d1e9b0b6effe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761726120518117605,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20
888938bd885ccf20850423b9545f18,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2819ec85bab8b03b491ad28a0f0ecabde313814e5d7f024250e1e8f1d236abc,PodSandboxId:8ee50cc3b253f719d56218ff12ffe42daa7f020986d9b106de1d25bd6571eb66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761726120380023091,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes
.pod.name: kube-apiserver-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f330a73d72f424296abc8191e560d26f,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3dfa8665ad62fe58c36a326e90824c0d5c4894ed96c0f8faf2d2ad8d9d23bde,PodSandboxId:695e4a6afdbe3f8037f961f3da493d0709473b020c21f80274cdfb5f59cc4e1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:176172611
9949956908,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e0c2dcd36751e9563c639803ab59f08,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e05b7b36-b9c0-4cae-8239-0a068381b3b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.302091307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b72d6f7-3eaf-42a5-af22-e087ff266c28 name=/runtime.v1.RuntimeService/Version
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.302256324Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b72d6f7-3eaf-42a5-af22-e087ff266c28 name=/runtime.v1.RuntimeService/Version
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.303882607Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b74e8e7-def3-4c7a-a344-aff4ef58aeb3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.305432267Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761726419305408974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589266,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b74e8e7-def3-4c7a-a344-aff4ef58aeb3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.305919614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=458207e0-6081-4514-b4bd-3e0d3cdd69a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.306151853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=458207e0-6081-4514-b4bd-3e0d3cdd69a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 08:26:59 addons-131912 crio[810]: time="2025-10-29 08:26:59.306838290Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16605e18e35333c6ba2011f6a7f7b761ce0a2e77239acb06d14d5aa07d44dd83,PodSandboxId:81edc2e472e80a520ee8045b913990f53d9ab38a853a25e568b13bb553c4bc76,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1761726278183648378,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d81a51a-504f-47a4-81bc-5026e5bfc0e8,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e69d7c8b4c2fcc9bb59b1a46a3d5fe5f6ceec41d5c906fdf299bced1c7d0e172,PodSandboxId:e343e0e5eff3e773c261924bcbaacb6280f6fb4ccbd7d8feeebd3a74beb7ace6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761726230648496955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ab81e3e5-9e9b-468e-b6e6-98b1a48d05c6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904399d0a6ff75131cbd1a10b6bbe74fae1cbb895cb8e8365b39f1fa35d7c3eb,PodSandboxId:a394ccd2102c2aa72afcc176fc2a41bd844b5743e27e0a1f55a5df4b909e21ad,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761726211963998361,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8xxtr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 61c7eb92-cc6b-413f-9604-52e7b934a10f,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2c571392546ad6e9f8b1398b04226d5882a25fd810599a1ea655656ec077ce81,PodSandboxId:899a8f711cbc861b438a3093b320b0cfe523212bfa3a0ba9768d354ebe658b83,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761726198197212983,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tl4j6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b746f0b6-4709-44cf-a64e-058b17670de1,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2093dbf1e4ca8b5c7f092b3662fe0cf96113fd2e0e1ba02c24495637725ac6d3,PodSandboxId:83487f0aa3cf50b34ad0ae7402942a456a88c4530eab572609d0efd67744bc98,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761726197959061780,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9zb9c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8de006-2135-4010-b566-d035aa8b0932,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3413252f464f6b9b0808a6f8571e7e9caf04612dc1f03478f1560bda6eb7d0ba,PodSandboxId:2444ae898b59ecc4316138e928bd5ca075288d8b1cf09e1d7585a0ce69f24faf,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761726183255226797,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-zzqv6,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 23340c31-f583-4fcc-8405-02ca3871d702,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e80a15687212bb9a8434af0af9e05f2440c2549625ba59c12aa74ea621c44a,PodSandboxId:d6aad7a0066dfbc5abc025f0f4ea866a9a1d876f80c3c785a26e84e735dc2c67,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761726156014614174,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6508363-a1cd-40ee-9b98-8c2c889e6943,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6101355b4ff0c7aa6e6295a08542a2605b0ae06061a3b9ae0f0c829167671a,PodSandboxId:60b1b06cc91154bdd0a03820016c7f60415e738c9a0df6f
2d76e967c0121b835,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761726144018449125,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sj55d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb9b430-f96e-4aa2-85cb-3ef45408d687,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b87d6b04a98acae7dfba534f103612897c5f3eafae0af37f2c9ffa7fc54c16,PodSandboxId:76bec73
96958a1475b3f3b56473f134d2ba13f1833ed6b2e42004b9c6f31bfda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761726140698776631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602311d9-bb0f-4cf9-a941-65904652d8ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150201f1c3b669cf6fdbc6530a773c132db4869518bc3503b2da0b9a684d09f1,PodSandboxId:b5587fcd0f0b2b88774
c1eb9ab3d4ef7828cf42ca80eeb407deae57df776094b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761726133076556719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mhr2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a334934-479c-460e-b11a-9762a9437079,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9b7c81fa840c8bcedb289e2295521250bc5662dd58c1fb9b14108c464ea9a6c,PodSandboxId:ab439ff04e1635742326d8de6b3c8f932a2f63abedd2a8033e34fd14338245b1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761726132246243856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m64np,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7441ef9e-b4ac-4490-84f0-33c2fa61d5b6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edac35f27b26999983cbadfaefd65607919673df0bef8e7669b95a12d969e359,PodSandboxId:08e97fe200146cf7740adb7ae9cd12035f83eb09082339ca997a0573987d7dfa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761726120908206564,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f6068d928c16aa2cfdb78c04bb3d052,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc54808e8add8b77bea7d1954387d30165acad230531ba8a8e61f82948f06ba,PodSandboxId:9930c3c212adee9abdcc231471b07ba7ea517cba7c2d18dbb962d1e9b0b6effe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761726120518117605,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20
888938bd885ccf20850423b9545f18,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2819ec85bab8b03b491ad28a0f0ecabde313814e5d7f024250e1e8f1d236abc,PodSandboxId:8ee50cc3b253f719d56218ff12ffe42daa7f020986d9b106de1d25bd6571eb66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761726120380023091,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes
.pod.name: kube-apiserver-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f330a73d72f424296abc8191e560d26f,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3dfa8665ad62fe58c36a326e90824c0d5c4894ed96c0f8faf2d2ad8d9d23bde,PodSandboxId:695e4a6afdbe3f8037f961f3da493d0709473b020c21f80274cdfb5f59cc4e1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:176172611
9949956908,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-131912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e0c2dcd36751e9563c639803ab59f08,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=458207e0-6081-4514-b4bd-3e0d3cdd69a3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	16605e18e3533       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   81edc2e472e80       nginx
	e69d7c8b4c2fc       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   e343e0e5eff3e       busybox
	904399d0a6ff7       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago       Running             controller                0                   a394ccd2102c2       ingress-nginx-controller-675c5ddd98-8xxtr
	2c571392546ad       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   3 minutes ago       Exited              patch                     0                   899a8f711cbc8       ingress-nginx-admission-patch-tl4j6
	2093dbf1e4ca8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   3 minutes ago       Exited              create                    0                   83487f0aa3cf5       ingress-nginx-admission-create-9zb9c
	3413252f464f6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            3 minutes ago       Running             gadget                    0                   2444ae898b59e       gadget-zzqv6
	70e80a1568721       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   d6aad7a0066df       kube-ingress-dns-minikube
	5d6101355b4ff       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   60b1b06cc9115       amd-gpu-device-plugin-sj55d
	75b87d6b04a98       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   76bec7396958a       storage-provisioner
	150201f1c3b66       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   b5587fcd0f0b2       coredns-66bc5c9577-mhr2w
	c9b7c81fa840c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             4 minutes ago       Running             kube-proxy                0                   ab439ff04e163       kube-proxy-m64np
	edac35f27b269       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             4 minutes ago       Running             kube-scheduler            0                   08e97fe200146       kube-scheduler-addons-131912
	cdc54808e8add       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             4 minutes ago       Running             kube-controller-manager   0                   9930c3c212ade       kube-controller-manager-addons-131912
	b2819ec85bab8       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             4 minutes ago       Running             kube-apiserver            0                   8ee50cc3b253f       kube-apiserver-addons-131912
	f3dfa8665ad62       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago       Running             etcd                      0                   695e4a6afdbe3       etcd-addons-131912
	
	
	==> coredns [150201f1c3b669cf6fdbc6530a773c132db4869518bc3503b2da0b9a684d09f1] <==
	[INFO] 10.244.0.8:44011 - 27653 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00011583s
	[INFO] 10.244.0.8:44011 - 24083 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000353918s
	[INFO] 10.244.0.8:44011 - 14002 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000126875s
	[INFO] 10.244.0.8:44011 - 64770 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000437353s
	[INFO] 10.244.0.8:44011 - 40472 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000321134s
	[INFO] 10.244.0.8:44011 - 30538 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000104298s
	[INFO] 10.244.0.8:44011 - 50154 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000183825s
	[INFO] 10.244.0.8:50448 - 23053 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000119395s
	[INFO] 10.244.0.8:50448 - 23419 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000134891s
	[INFO] 10.244.0.8:39107 - 20554 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000086814s
	[INFO] 10.244.0.8:39107 - 20276 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008714s
	[INFO] 10.244.0.8:35350 - 7037 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000144309s
	[INFO] 10.244.0.8:35350 - 7267 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000124218s
	[INFO] 10.244.0.8:56362 - 60483 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000089758s
	[INFO] 10.244.0.8:56362 - 60306 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000239351s
	[INFO] 10.244.0.23:39384 - 1667 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000340485s
	[INFO] 10.244.0.23:50204 - 62621 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000138084s
	[INFO] 10.244.0.23:38697 - 41484 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000114953s
	[INFO] 10.244.0.23:50201 - 5403 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011873s
	[INFO] 10.244.0.23:35757 - 44438 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105789s
	[INFO] 10.244.0.23:36693 - 54756 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108291s
	[INFO] 10.244.0.23:47471 - 35453 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003539039s
	[INFO] 10.244.0.23:53477 - 64841 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.005097607s
	[INFO] 10.244.0.28:57894 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000904533s
	[INFO] 10.244.0.28:45154 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147339s
	
	
	==> describe nodes <==
	Name:               addons-131912
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-131912
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=addons-131912
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T08_22_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-131912
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 08:22:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-131912
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 08:26:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 08:25:09 +0000   Wed, 29 Oct 2025 08:22:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 08:25:09 +0000   Wed, 29 Oct 2025 08:22:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 08:25:09 +0000   Wed, 29 Oct 2025 08:22:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 08:25:09 +0000   Wed, 29 Oct 2025 08:22:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.91
	  Hostname:    addons-131912
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 82b531d12b074378a2c5b2b88852f51e
	  System UUID:                82b531d1-2b07-4378-a2c5-b2b88852f51e
	  Boot ID:                    5311d671-bd1c-46aa-91d3-a774a3c77091
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  default                     hello-world-app-5d498dc89-6k9zh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-zzqv6                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-8xxtr    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m40s
	  kube-system                 amd-gpu-device-plugin-sj55d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 coredns-66bc5c9577-mhr2w                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m48s
	  kube-system                 etcd-addons-131912                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m54s
	  kube-system                 kube-apiserver-addons-131912                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-controller-manager-addons-131912        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-proxy-m64np                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-scheduler-addons-131912                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m46s  kube-proxy       
	  Normal  Starting                 4m54s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m54s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m54s  kubelet          Node addons-131912 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s  kubelet          Node addons-131912 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s  kubelet          Node addons-131912 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m53s  kubelet          Node addons-131912 status is now: NodeReady
	  Normal  RegisteredNode           4m49s  node-controller  Node addons-131912 event: Registered Node addons-131912 in Controller
	
	
	==> dmesg <==
	[  +1.307678] kauditd_printk_skb: 303 callbacks suppressed
	[  +1.218247] kauditd_printk_skb: 524 callbacks suppressed
	[  +3.936487] kauditd_printk_skb: 197 callbacks suppressed
	[  +7.914832] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.257229] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.264229] kauditd_printk_skb: 11 callbacks suppressed
	[Oct29 08:23] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.121025] kauditd_printk_skb: 107 callbacks suppressed
	[  +4.722567] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.382882] kauditd_printk_skb: 101 callbacks suppressed
	[  +1.253626] kauditd_printk_skb: 157 callbacks suppressed
	[  +4.776150] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.832824] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.490196] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.538918] kauditd_printk_skb: 38 callbacks suppressed
	[Oct29 08:24] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000028] kauditd_printk_skb: 22 callbacks suppressed
	[  +2.057888] kauditd_printk_skb: 109 callbacks suppressed
	[  +0.000047] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.878721] kauditd_printk_skb: 185 callbacks suppressed
	[  +5.857967] kauditd_printk_skb: 127 callbacks suppressed
	[  +3.897690] kauditd_printk_skb: 93 callbacks suppressed
	[ +12.133402] kauditd_printk_skb: 31 callbacks suppressed
	[Oct29 08:25] kauditd_printk_skb: 61 callbacks suppressed
	[Oct29 08:26] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [f3dfa8665ad62fe58c36a326e90824c0d5c4894ed96c0f8faf2d2ad8d9d23bde] <==
	{"level":"info","ts":"2025-10-29T08:23:07.776333Z","caller":"traceutil/trace.go:172","msg":"trace[1845887237] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:983; }","duration":"139.903154ms","start":"2025-10-29T08:23:07.636426Z","end":"2025-10-29T08:23:07.776329Z","steps":["trace[1845887237] 'range keys from in-memory index tree'  (duration: 139.856902ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T08:23:16.547631Z","caller":"traceutil/trace.go:172","msg":"trace[1818592388] transaction","detail":"{read_only:false; response_revision:1011; number_of_response:1; }","duration":"109.09129ms","start":"2025-10-29T08:23:16.438526Z","end":"2025-10-29T08:23:16.547617Z","steps":["trace[1818592388] 'process raft request'  (duration: 108.970912ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T08:23:26.152729Z","caller":"traceutil/trace.go:172","msg":"trace[1446412720] linearizableReadLoop","detail":"{readStateIndex:1123; appliedIndex:1123; }","duration":"204.468055ms","start":"2025-10-29T08:23:25.948243Z","end":"2025-10-29T08:23:26.152711Z","steps":["trace[1446412720] 'read index received'  (duration: 204.459536ms)","trace[1446412720] 'applied index is now lower than readState.Index'  (duration: 7.91µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-29T08:23:26.152921Z","caller":"traceutil/trace.go:172","msg":"trace[1615280524] transaction","detail":"{read_only:false; response_revision:1091; number_of_response:1; }","duration":"241.811051ms","start":"2025-10-29T08:23:25.911100Z","end":"2025-10-29T08:23:26.152912Z","steps":["trace[1615280524] 'process raft request'  (duration: 241.72237ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-29T08:23:26.152931Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.667478ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-d4btw\" limit:1 ","response":"range_response_count:1 size:3941"}
	{"level":"info","ts":"2025-10-29T08:23:26.152962Z","caller":"traceutil/trace.go:172","msg":"trace[1287361334] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-create-d4btw; range_end:; response_count:1; response_revision:1090; }","duration":"204.716436ms","start":"2025-10-29T08:23:25.948238Z","end":"2025-10-29T08:23:26.152954Z","steps":["trace[1287361334] 'agreement among raft nodes before linearized reading'  (duration: 204.59779ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-29T08:23:26.153112Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.589142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-29T08:23:26.153128Z","caller":"traceutil/trace.go:172","msg":"trace[881590229] range","detail":"{range_begin:/registry/replicasets; range_end:; response_count:0; response_revision:1091; }","duration":"194.611513ms","start":"2025-10-29T08:23:25.958512Z","end":"2025-10-29T08:23:26.153123Z","steps":["trace[881590229] 'agreement among raft nodes before linearized reading'  (duration: 194.575947ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-29T08:23:26.153230Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.130174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-29T08:23:26.153285Z","caller":"traceutil/trace.go:172","msg":"trace[540899672] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1091; }","duration":"101.157907ms","start":"2025-10-29T08:23:26.052087Z","end":"2025-10-29T08:23:26.153245Z","steps":["trace[540899672] 'agreement among raft nodes before linearized reading'  (duration: 101.115687ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-29T08:23:26.153234Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.911918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-29T08:23:26.154777Z","caller":"traceutil/trace.go:172","msg":"trace[1918819406] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1091; }","duration":"108.037832ms","start":"2025-10-29T08:23:26.045317Z","end":"2025-10-29T08:23:26.153355Z","steps":["trace[1918819406] 'agreement among raft nodes before linearized reading'  (duration: 107.902475ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T08:23:29.276508Z","caller":"traceutil/trace.go:172","msg":"trace[191340318] linearizableReadLoop","detail":"{readStateIndex:1137; appliedIndex:1137; }","duration":"140.527354ms","start":"2025-10-29T08:23:29.135966Z","end":"2025-10-29T08:23:29.276494Z","steps":["trace[191340318] 'read index received'  (duration: 140.523401ms)","trace[191340318] 'applied index is now lower than readState.Index'  (duration: 3.27µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-29T08:23:29.276622Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.640446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-29T08:23:29.276640Z","caller":"traceutil/trace.go:172","msg":"trace[2124739926] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1104; }","duration":"140.671857ms","start":"2025-10-29T08:23:29.135963Z","end":"2025-10-29T08:23:29.276635Z","steps":["trace[2124739926] 'agreement among raft nodes before linearized reading'  (duration: 140.615067ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T08:23:29.277313Z","caller":"traceutil/trace.go:172","msg":"trace[1938180814] transaction","detail":"{read_only:false; response_revision:1105; number_of_response:1; }","duration":"188.928535ms","start":"2025-10-29T08:23:29.088375Z","end":"2025-10-29T08:23:29.277303Z","steps":["trace[1938180814] 'process raft request'  (duration: 188.172029ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-29T08:23:33.346432Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.031588ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-29T08:23:33.346502Z","caller":"traceutil/trace.go:172","msg":"trace[2022255000] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1122; }","duration":"132.116333ms","start":"2025-10-29T08:23:33.214373Z","end":"2025-10-29T08:23:33.346490Z","steps":["trace[2022255000] 'range keys from in-memory index tree'  (duration: 131.9973ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T08:24:25.375022Z","caller":"traceutil/trace.go:172","msg":"trace[2103278464] transaction","detail":"{read_only:false; response_revision:1472; number_of_response:1; }","duration":"245.113068ms","start":"2025-10-29T08:24:25.129899Z","end":"2025-10-29T08:24:25.375012Z","steps":["trace[2103278464] 'process raft request'  (duration: 244.958324ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T08:24:25.375407Z","caller":"traceutil/trace.go:172","msg":"trace[1248184114] linearizableReadLoop","detail":"{readStateIndex:1523; appliedIndex:1523; }","duration":"161.317255ms","start":"2025-10-29T08:24:25.213412Z","end":"2025-10-29T08:24:25.374730Z","steps":["trace[1248184114] 'read index received'  (duration: 161.311694ms)","trace[1248184114] 'applied index is now lower than readState.Index'  (duration: 4.561µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-29T08:24:25.375623Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.152252ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-29T08:24:25.375661Z","caller":"traceutil/trace.go:172","msg":"trace[993959509] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1472; }","duration":"162.245262ms","start":"2025-10-29T08:24:25.213408Z","end":"2025-10-29T08:24:25.375653Z","steps":["trace[993959509] 'agreement among raft nodes before linearized reading'  (duration: 162.130957ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-29T08:24:25.375966Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.404136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-4ff904af-fa12-437d-acb0-f26b2bf41ea4\" limit:1 ","response":"range_response_count:1 size:4423"}
	{"level":"info","ts":"2025-10-29T08:24:25.376012Z","caller":"traceutil/trace.go:172","msg":"trace[560601433] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-delete-pvc-4ff904af-fa12-437d-acb0-f26b2bf41ea4; range_end:; response_count:1; response_revision:1472; }","duration":"138.457225ms","start":"2025-10-29T08:24:25.237547Z","end":"2025-10-29T08:24:25.376004Z","steps":["trace[560601433] 'agreement among raft nodes before linearized reading'  (duration: 138.307092ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-29T08:24:32.287057Z","caller":"traceutil/trace.go:172","msg":"trace[409389847] transaction","detail":"{read_only:false; response_revision:1531; number_of_response:1; }","duration":"296.877183ms","start":"2025-10-29T08:24:31.990166Z","end":"2025-10-29T08:24:32.287043Z","steps":["trace[409389847] 'process raft request'  (duration: 295.406894ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:26:59 up 5 min,  0 users,  load average: 1.07, 1.35, 0.71
	Linux addons-131912 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b2819ec85bab8b03b491ad28a0f0ecabde313814e5d7f024250e1e8f1d236abc] <==
	W1029 08:22:40.588316       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1029 08:22:40.615585       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1029 08:22:40.626898       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1029 08:23:57.975505       1 conn.go:339] Error on socket receive: read tcp 192.168.39.91:8443->192.168.39.1:35828: use of closed network connection
	E1029 08:23:58.151718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.91:8443->192.168.39.1:35860: use of closed network connection
	I1029 08:24:07.363910       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.157.201"}
	I1029 08:24:28.838491       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1029 08:24:33.421981       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1029 08:24:33.647866       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.25.174"}
	E1029 08:24:38.196675       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1029 08:24:39.618588       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1029 08:25:03.689286       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1029 08:25:03.689321       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1029 08:25:03.713923       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1029 08:25:03.713973       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1029 08:25:03.724747       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1029 08:25:03.726889       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1029 08:25:03.798932       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1029 08:25:03.799388       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1029 08:25:03.889569       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1029 08:25:03.889657       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1029 08:25:04.714687       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1029 08:25:04.888873       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1029 08:25:04.928963       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1029 08:26:58.157077       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.18.247"}
	
	
	==> kube-controller-manager [cdc54808e8add8b77bea7d1954387d30165acad230531ba8a8e61f82948f06ba] <==
	E1029 08:25:12.450702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1029 08:25:13.560672       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1029 08:25:13.561624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1029 08:25:21.465848       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1029 08:25:21.467048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1029 08:25:24.159870       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1029 08:25:24.161036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1029 08:25:25.484137       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1029 08:25:25.485129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1029 08:25:40.410557       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1029 08:25:40.411493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1029 08:25:45.306237       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1029 08:25:45.307273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1029 08:25:46.076506       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1029 08:25:46.077474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1029 08:26:24.096253       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1029 08:26:24.097296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1029 08:26:26.569100       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1029 08:26:26.570519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1029 08:26:33.923981       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1029 08:26:33.925078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1029 08:26:56.738176       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1029 08:26:56.740156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1029 08:26:59.081359       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1029 08:26:59.083865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [c9b7c81fa840c8bcedb289e2295521250bc5662dd58c1fb9b14108c464ea9a6c] <==
	I1029 08:22:12.977305       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 08:22:13.084441       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 08:22:13.084468       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.91"]
	E1029 08:22:13.084529       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 08:22:13.303210       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1029 08:22:13.303259       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1029 08:22:13.303279       1 server_linux.go:132] "Using iptables Proxier"
	I1029 08:22:13.326710       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 08:22:13.327867       1 server.go:527] "Version info" version="v1.34.1"
	I1029 08:22:13.327894       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:22:13.338089       1 config.go:200] "Starting service config controller"
	I1029 08:22:13.338114       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 08:22:13.338135       1 config.go:106] "Starting endpoint slice config controller"
	I1029 08:22:13.338139       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 08:22:13.338159       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 08:22:13.338163       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 08:22:13.339904       1 config.go:309] "Starting node config controller"
	I1029 08:22:13.339928       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 08:22:13.339935       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 08:22:13.438471       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1029 08:22:13.438520       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 08:22:13.438574       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [edac35f27b26999983cbadfaefd65607919673df0bef8e7669b95a12d969e359] <==
	I1029 08:22:03.572977       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 08:22:03.575503       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 08:22:03.575617       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1029 08:22:03.576677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1029 08:22:03.577855       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 08:22:03.578103       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1029 08:22:03.587584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1029 08:22:03.587921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1029 08:22:03.589355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1029 08:22:03.589727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1029 08:22:03.589811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1029 08:22:03.589859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1029 08:22:03.589912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1029 08:22:03.589961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1029 08:22:03.589996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1029 08:22:03.590074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1029 08:22:03.590130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1029 08:22:03.590173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1029 08:22:03.590224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1029 08:22:03.590252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1029 08:22:03.590306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1029 08:22:03.590360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1029 08:22:03.590423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1029 08:22:03.590462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1029 08:22:04.676093       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 29 08:25:16 addons-131912 kubelet[1496]: E1029 08:25:16.042430    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761726316041980342  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:25:16 addons-131912 kubelet[1496]: E1029 08:25:16.042469    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761726316041980342  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:25:26 addons-131912 kubelet[1496]: E1029 08:25:26.045344    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761726326044852663  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:25:26 addons-131912 kubelet[1496]: E1029 08:25:26.045369    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761726326044852663  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:25:36 addons-131912 kubelet[1496]: E1029 08:25:36.049956    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761726336047545684  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:25:36 addons-131912 kubelet[1496]: E1029 08:25:36.049980    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761726336047545684  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:25:46 addons-131912 kubelet[1496]: E1029 08:25:46.053145    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761726346052351312  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:25:46 addons-131912 kubelet[1496]: E1029 08:25:46.053297    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761726346052351312  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:25:56 addons-131912 kubelet[1496]: E1029 08:25:56.058339    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761726356057494916  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:25:56 addons-131912 kubelet[1496]: E1029 08:25:56.058394    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761726356057494916  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:26:06 addons-131912 kubelet[1496]: E1029 08:26:06.061417    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761726366060985570  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:26:06 addons-131912 kubelet[1496]: E1029 08:26:06.061446    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761726366060985570  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:26:16 addons-131912 kubelet[1496]: E1029 08:26:16.065302    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761726376064709322  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:26:16 addons-131912 kubelet[1496]: E1029 08:26:16.065332    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761726376064709322  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:26:17 addons-131912 kubelet[1496]: I1029 08:26:17.741308    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:26:17 addons-131912 kubelet[1496]: I1029 08:26:17.741415    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-sj55d" secret="" err="secret \"gcp-auth\" not found"
	Oct 29 08:26:26 addons-131912 kubelet[1496]: E1029 08:26:26.069828    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761726386069306081  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:26:26 addons-131912 kubelet[1496]: E1029 08:26:26.069859    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761726386069306081  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:26:36 addons-131912 kubelet[1496]: E1029 08:26:36.073026    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761726396072489383  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:26:36 addons-131912 kubelet[1496]: E1029 08:26:36.073056    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761726396072489383  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:26:46 addons-131912 kubelet[1496]: E1029 08:26:46.076652    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761726406076009790  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:26:46 addons-131912 kubelet[1496]: E1029 08:26:46.076677    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761726406076009790  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:26:56 addons-131912 kubelet[1496]: E1029 08:26:56.080303    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761726416079744150  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:26:56 addons-131912 kubelet[1496]: E1029 08:26:56.080335    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761726416079744150  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Oct 29 08:26:58 addons-131912 kubelet[1496]: I1029 08:26:58.155130    1496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gblb\" (UniqueName: \"kubernetes.io/projected/08f70bb8-44fa-467f-9377-b6372e23ff97-kube-api-access-4gblb\") pod \"hello-world-app-5d498dc89-6k9zh\" (UID: \"08f70bb8-44fa-467f-9377-b6372e23ff97\") " pod="default/hello-world-app-5d498dc89-6k9zh"
	
	
	==> storage-provisioner [75b87d6b04a98acae7dfba534f103612897c5f3eafae0af37f2c9ffa7fc54c16] <==
	W1029 08:26:34.431380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:36.435313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:36.440486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:38.443595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:38.449183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:40.452455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:40.459237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:42.463026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:42.470518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:44.474575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:44.480495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:46.484204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:46.489343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:48.493149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:48.501213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:50.504852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:50.509931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:52.513129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:52.518007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:54.520880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:54.525336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:56.530516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:56.536048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:58.541939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1029 08:26:58.550981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-131912 -n addons-131912
helpers_test.go:269: (dbg) Run:  kubectl --context addons-131912 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-6k9zh ingress-nginx-admission-create-9zb9c ingress-nginx-admission-patch-tl4j6
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-131912 describe pod hello-world-app-5d498dc89-6k9zh ingress-nginx-admission-create-9zb9c ingress-nginx-admission-patch-tl4j6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-131912 describe pod hello-world-app-5d498dc89-6k9zh ingress-nginx-admission-create-9zb9c ingress-nginx-admission-patch-tl4j6: exit status 1 (66.731183ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-6k9zh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-131912/192.168.39.91
	Start Time:       Wed, 29 Oct 2025 08:26:58 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4gblb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4gblb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-6k9zh to addons-131912
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9zb9c" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tl4j6" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-131912 describe pod hello-world-app-5d498dc89-6k9zh ingress-nginx-admission-create-9zb9c ingress-nginx-admission-patch-tl4j6: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-131912 addons disable ingress-dns --alsologtostderr -v=1: (1.069538341s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-131912 addons disable ingress --alsologtostderr -v=1: (7.702285554s)
--- FAIL: TestAddons/parallel/Ingress (156.04s)

                                                
                                    
x
+
TestPreload (142.65s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-740345 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1029 09:11:54.221828  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-740345 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m15.609525651s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-740345 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-740345 image pull gcr.io/k8s-minikube/busybox: (4.036061909s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-740345
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-740345: (7.009402073s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-740345 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1029 09:13:45.865077  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:13:51.152232  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-740345 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (53.148373332s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-740345 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-29 09:13:55.891699544 +0000 UTC m=+3197.088362993
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-740345 -n test-preload-740345
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-740345 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-740345 logs -n 25: (1.030350761s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-181321 ssh -n multinode-181321-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:00 UTC │ 29 Oct 25 09:00 UTC │
	│ ssh     │ multinode-181321 ssh -n multinode-181321 sudo cat /home/docker/cp-test_multinode-181321-m03_multinode-181321.txt                                          │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:00 UTC │ 29 Oct 25 09:00 UTC │
	│ cp      │ multinode-181321 cp multinode-181321-m03:/home/docker/cp-test.txt multinode-181321-m02:/home/docker/cp-test_multinode-181321-m03_multinode-181321-m02.txt │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:00 UTC │ 29 Oct 25 09:00 UTC │
	│ ssh     │ multinode-181321 ssh -n multinode-181321-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:00 UTC │ 29 Oct 25 09:01 UTC │
	│ ssh     │ multinode-181321 ssh -n multinode-181321-m02 sudo cat /home/docker/cp-test_multinode-181321-m03_multinode-181321-m02.txt                                  │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:01 UTC │ 29 Oct 25 09:01 UTC │
	│ node    │ multinode-181321 node stop m03                                                                                                                            │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:01 UTC │ 29 Oct 25 09:01 UTC │
	│ node    │ multinode-181321 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:01 UTC │ 29 Oct 25 09:01 UTC │
	│ node    │ list -p multinode-181321                                                                                                                                  │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:01 UTC │                     │
	│ stop    │ -p multinode-181321                                                                                                                                       │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:01 UTC │ 29 Oct 25 09:04 UTC │
	│ start   │ -p multinode-181321 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:04 UTC │ 29 Oct 25 09:06 UTC │
	│ node    │ list -p multinode-181321                                                                                                                                  │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:06 UTC │                     │
	│ node    │ multinode-181321 node delete m03                                                                                                                          │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:06 UTC │ 29 Oct 25 09:06 UTC │
	│ stop    │ multinode-181321 stop                                                                                                                                     │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:06 UTC │ 29 Oct 25 09:09 UTC │
	│ start   │ -p multinode-181321 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:09 UTC │ 29 Oct 25 09:10 UTC │
	│ node    │ list -p multinode-181321                                                                                                                                  │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ start   │ -p multinode-181321-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-181321-m02 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │                     │
	│ start   │ -p multinode-181321-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-181321-m03 │ jenkins │ v1.37.0 │ 29 Oct 25 09:10 UTC │ 29 Oct 25 09:11 UTC │
	│ node    │ add -p multinode-181321                                                                                                                                   │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │                     │
	│ delete  │ -p multinode-181321-m03                                                                                                                                   │ multinode-181321-m03 │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ delete  │ -p multinode-181321                                                                                                                                       │ multinode-181321     │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:11 UTC │
	│ start   │ -p test-preload-740345 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-740345  │ jenkins │ v1.37.0 │ 29 Oct 25 09:11 UTC │ 29 Oct 25 09:12 UTC │
	│ image   │ test-preload-740345 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-740345  │ jenkins │ v1.37.0 │ 29 Oct 25 09:12 UTC │ 29 Oct 25 09:12 UTC │
	│ stop    │ -p test-preload-740345                                                                                                                                    │ test-preload-740345  │ jenkins │ v1.37.0 │ 29 Oct 25 09:12 UTC │ 29 Oct 25 09:13 UTC │
	│ start   │ -p test-preload-740345 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-740345  │ jenkins │ v1.37.0 │ 29 Oct 25 09:13 UTC │ 29 Oct 25 09:13 UTC │
	│ image   │ test-preload-740345 image list                                                                                                                            │ test-preload-740345  │ jenkins │ v1.37.0 │ 29 Oct 25 09:13 UTC │ 29 Oct 25 09:13 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:13:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:13:02.599518  163762 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:13:02.599823  163762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:13:02.599834  163762 out.go:374] Setting ErrFile to fd 2...
	I1029 09:13:02.599838  163762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:13:02.600062  163762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
	I1029 09:13:02.600563  163762 out.go:368] Setting JSON to false
	I1029 09:13:02.601598  163762 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6912,"bootTime":1761722271,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:13:02.601695  163762 start.go:143] virtualization: kvm guest
	I1029 09:13:02.603391  163762 out.go:179] * [test-preload-740345] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:13:02.604796  163762 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:13:02.604791  163762 notify.go:221] Checking for updates...
	I1029 09:13:02.606048  163762 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:13:02.607056  163762 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	I1029 09:13:02.608081  163762 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	I1029 09:13:02.609127  163762 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:13:02.610036  163762 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:13:02.611576  163762 config.go:182] Loaded profile config "test-preload-740345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1029 09:13:02.612973  163762 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1029 09:13:02.613951  163762 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:13:02.649415  163762 out.go:179] * Using the kvm2 driver based on existing profile
	I1029 09:13:02.650277  163762 start.go:309] selected driver: kvm2
	I1029 09:13:02.650300  163762 start.go:930] validating driver "kvm2" against &{Name:test-preload-740345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-740345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:13:02.650454  163762 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:13:02.651516  163762 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:13:02.651562  163762 cni.go:84] Creating CNI manager for ""
	I1029 09:13:02.651624  163762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1029 09:13:02.651674  163762 start.go:353] cluster config:
	{Name:test-preload-740345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-740345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:13:02.651792  163762 iso.go:125] acquiring lock: {Name:mk91f2a3d67828aaa5b9f798c71cdbe9317767a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:13:02.652994  163762 out.go:179] * Starting "test-preload-740345" primary control-plane node in "test-preload-740345" cluster
	I1029 09:13:02.653807  163762 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1029 09:13:02.763330  163762 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1029 09:13:02.763377  163762 cache.go:59] Caching tarball of preloaded images
	I1029 09:13:02.763598  163762 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1029 09:13:02.765034  163762 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1029 09:13:02.765991  163762 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1029 09:13:02.885363  163762 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1029 09:13:02.885431  163762 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21800-137232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1029 09:13:13.815181  163762 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1029 09:13:13.815382  163762 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/config.json ...
	I1029 09:13:13.815669  163762 start.go:360] acquireMachinesLock for test-preload-740345: {Name:mkcf4e1d7f2bf8251db3d5b4273e9a32697d7a63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1029 09:13:13.815753  163762 start.go:364] duration metric: took 56.893µs to acquireMachinesLock for "test-preload-740345"
	I1029 09:13:13.815771  163762 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:13:13.815777  163762 fix.go:54] fixHost starting: 
	I1029 09:13:13.818101  163762 fix.go:112] recreateIfNeeded on test-preload-740345: state=Stopped err=<nil>
	W1029 09:13:13.818130  163762 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 09:13:13.819813  163762 out.go:252] * Restarting existing kvm2 VM for "test-preload-740345" ...
	I1029 09:13:13.819891  163762 main.go:143] libmachine: starting domain...
	I1029 09:13:13.819907  163762 main.go:143] libmachine: ensuring networks are active...
	I1029 09:13:13.820749  163762 main.go:143] libmachine: Ensuring network default is active
	I1029 09:13:13.821119  163762 main.go:143] libmachine: Ensuring network mk-test-preload-740345 is active
	I1029 09:13:13.821535  163762 main.go:143] libmachine: getting domain XML...
	I1029 09:13:13.822728  163762 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-740345</name>
	  <uuid>5ee02003-a92f-4f07-91e6-1da45ae7decf</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21800-137232/.minikube/machines/test-preload-740345/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21800-137232/.minikube/machines/test-preload-740345/test-preload-740345.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f1:93:56'/>
	      <source network='mk-test-preload-740345'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:ce:31:97'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1029 09:13:15.099109  163762 main.go:143] libmachine: waiting for domain to start...
	I1029 09:13:15.100681  163762 main.go:143] libmachine: domain is now running
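For context on the step above: the kvm2 driver asks libvirtd (over the qemu:///system connection) to boot the domain defined by the XML it just logged. The driver uses the libvirt API directly; the Go sketch below shells out to virsh only to illustrate the equivalent operation and is not the driver's code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Ask libvirtd on the system connection to start the already-defined
		// domain; equivalent in effect to the "starting domain..." step above.
		out, err := exec.Command("virsh", "--connect", "qemu:///system",
			"start", "test-preload-740345").CombinedOutput()
		if err != nil {
			fmt.Printf("virsh start failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}
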
	I1029 09:13:15.100704  163762 main.go:143] libmachine: waiting for IP...
	I1029 09:13:15.101710  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:15.102227  163762 main.go:143] libmachine: domain test-preload-740345 has current primary IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:15.102242  163762 main.go:143] libmachine: found domain IP: 192.168.39.180
	I1029 09:13:15.102248  163762 main.go:143] libmachine: reserving static IP address...
	I1029 09:13:15.102709  163762 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-740345", mac: "52:54:00:f1:93:56", ip: "192.168.39.180"} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:11:50 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:15.102743  163762 main.go:143] libmachine: skip adding static IP to network mk-test-preload-740345 - found existing host DHCP lease matching {name: "test-preload-740345", mac: "52:54:00:f1:93:56", ip: "192.168.39.180"}
	I1029 09:13:15.102758  163762 main.go:143] libmachine: reserved static IP address 192.168.39.180 for domain test-preload-740345
	I1029 09:13:15.102772  163762 main.go:143] libmachine: waiting for SSH...
	I1029 09:13:15.102782  163762 main.go:143] libmachine: Getting to WaitForSSH function...
	I1029 09:13:15.105201  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:15.105596  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:11:50 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:15.105619  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:15.105798  163762 main.go:143] libmachine: Using SSH client type: native
	I1029 09:13:15.106066  163762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1029 09:13:15.106080  163762 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1029 09:13:18.196724  163762 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.180:22: connect: no route to host
	I1029 09:13:24.276741  163762 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.180:22: connect: no route to host
	I1029 09:13:27.392904  163762 main.go:143] libmachine: SSH cmd err, output: <nil>: 
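The "waiting for SSH" phase above amounts to retrying a connection to port 22 until the guest network is up; the "no route to host" errors are the expected failures while the VM is still booting, and the wait ends once the trivial "exit 0" command succeeds. A rough Go sketch of the TCP-level retry, using a hypothetical waitForSSH helper (the real driver also opens an SSH session once the port answers):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls addr (host:port) until a TCP connection succeeds or the
	// deadline passes. Hypothetical helper for illustration only.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			fmt.Println("still waiting:", err) // e.g. "connect: no route to host"
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("timed out waiting for SSH on %s", addr)
	}

	func main() {
		if err := waitForSSH("192.168.39.180:22", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
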
	I1029 09:13:27.396703  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:27.397169  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:27.397196  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:27.397523  163762 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/config.json ...
	I1029 09:13:27.397732  163762 machine.go:94] provisionDockerMachine start ...
	I1029 09:13:27.400217  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:27.400653  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:27.400694  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:27.400917  163762 main.go:143] libmachine: Using SSH client type: native
	I1029 09:13:27.401146  163762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1029 09:13:27.401158  163762 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:13:27.516280  163762 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1029 09:13:27.516317  163762 buildroot.go:166] provisioning hostname "test-preload-740345"
	I1029 09:13:27.519314  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:27.519747  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:27.519772  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:27.519933  163762 main.go:143] libmachine: Using SSH client type: native
	I1029 09:13:27.520136  163762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1029 09:13:27.520147  163762 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-740345 && echo "test-preload-740345" | sudo tee /etc/hostname
	I1029 09:13:27.650487  163762 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-740345
	
	I1029 09:13:27.653490  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:27.653915  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:27.653939  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:27.654122  163762 main.go:143] libmachine: Using SSH client type: native
	I1029 09:13:27.654348  163762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1029 09:13:27.654370  163762 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-740345' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-740345/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-740345' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:13:27.779505  163762 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:13:27.779554  163762 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21800-137232/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-137232/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-137232/.minikube}
	I1029 09:13:27.779610  163762 buildroot.go:174] setting up certificates
	I1029 09:13:27.779626  163762 provision.go:84] configureAuth start
	I1029 09:13:27.782843  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:27.783281  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:27.783307  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:27.785668  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:27.786146  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:27.786503  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:27.787386  163762 provision.go:143] copyHostCerts
	I1029 09:13:27.787734  163762 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-137232/.minikube/ca.pem, removing ...
	I1029 09:13:27.787750  163762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-137232/.minikube/ca.pem
	I1029 09:13:27.787832  163762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-137232/.minikube/ca.pem (1082 bytes)
	I1029 09:13:27.787989  163762 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-137232/.minikube/cert.pem, removing ...
	I1029 09:13:27.788002  163762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-137232/.minikube/cert.pem
	I1029 09:13:27.788049  163762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-137232/.minikube/cert.pem (1123 bytes)
	I1029 09:13:27.788141  163762 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-137232/.minikube/key.pem, removing ...
	I1029 09:13:27.788151  163762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-137232/.minikube/key.pem
	I1029 09:13:27.788191  163762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-137232/.minikube/key.pem (1675 bytes)
	I1029 09:13:27.788285  163762 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-137232/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca-key.pem org=jenkins.test-preload-740345 san=[127.0.0.1 192.168.39.180 localhost minikube test-preload-740345]
	I1029 09:13:27.937632  163762 provision.go:177] copyRemoteCerts
	I1029 09:13:27.937715  163762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:13:27.940381  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:27.940843  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:27.940878  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:27.941035  163762 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/test-preload-740345/id_rsa Username:docker}
	I1029 09:13:28.031247  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:13:28.059757  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1029 09:13:28.088102  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1029 09:13:28.115955  163762 provision.go:87] duration metric: took 336.307818ms to configureAuth
	I1029 09:13:28.115998  163762 buildroot.go:189] setting minikube options for container-runtime
	I1029 09:13:28.116190  163762 config.go:182] Loaded profile config "test-preload-740345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1029 09:13:28.119149  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:28.119598  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:28.119630  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:28.119813  163762 main.go:143] libmachine: Using SSH client type: native
	I1029 09:13:28.120022  163762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1029 09:13:28.120043  163762 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:13:28.371916  163762 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:13:28.371954  163762 machine.go:97] duration metric: took 974.20591ms to provisionDockerMachine
	I1029 09:13:28.371971  163762 start.go:293] postStartSetup for "test-preload-740345" (driver="kvm2")
	I1029 09:13:28.371988  163762 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:13:28.372081  163762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:13:28.375030  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:28.375500  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:28.375544  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:28.375718  163762 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/test-preload-740345/id_rsa Username:docker}
	I1029 09:13:28.464234  163762 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:13:28.469076  163762 info.go:137] Remote host: Buildroot 2025.02
	I1029 09:13:28.469115  163762 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-137232/.minikube/addons for local assets ...
	I1029 09:13:28.469210  163762 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-137232/.minikube/files for local assets ...
	I1029 09:13:28.469337  163762 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-137232/.minikube/files/etc/ssl/certs/1412312.pem -> 1412312.pem in /etc/ssl/certs
	I1029 09:13:28.469498  163762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:13:28.481056  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/files/etc/ssl/certs/1412312.pem --> /etc/ssl/certs/1412312.pem (1708 bytes)
	I1029 09:13:28.510626  163762 start.go:296] duration metric: took 138.635618ms for postStartSetup
	I1029 09:13:28.510688  163762 fix.go:56] duration metric: took 14.694901794s for fixHost
	I1029 09:13:28.513680  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:28.514111  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:28.514139  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:28.514354  163762 main.go:143] libmachine: Using SSH client type: native
	I1029 09:13:28.514604  163762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1029 09:13:28.514618  163762 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1029 09:13:28.629628  163762 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761729208.588192246
	
	I1029 09:13:28.629662  163762 fix.go:216] guest clock: 1761729208.588192246
	I1029 09:13:28.629675  163762 fix.go:229] Guest: 2025-10-29 09:13:28.588192246 +0000 UTC Remote: 2025-10-29 09:13:28.510693796 +0000 UTC m=+25.962850769 (delta=77.49845ms)
	I1029 09:13:28.629698  163762 fix.go:200] guest clock delta is within tolerance: 77.49845ms
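The clock check above runs date +%s.%N in the guest, parses the fractional timestamp, and compares it with the host-side reference time recorded around the command; here the 77.49845ms delta is within tolerance, so no clock adjustment is needed. A minimal Go sketch of that comparison, using the values from the log and an assumed (hypothetical) one-second tolerance:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		// Guest output of `date +%s.%N`, copied from the log above.
		guestRaw := "1761729208.588192246"
		guestSec, err := strconv.ParseFloat(guestRaw, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(guestSec*float64(time.Second)))

		// Host-side reference time (the "Remote" timestamp in the log).
		host := time.Date(2025, 10, 29, 9, 13, 28, 510693796, time.UTC)

		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // hypothetical tolerance for illustration
		fmt.Printf("guest clock delta: %v (within tolerance: %t)\n", delta, delta <= tolerance)
	}
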
	I1029 09:13:28.629710  163762 start.go:83] releasing machines lock for "test-preload-740345", held for 14.813943735s
	I1029 09:13:28.632618  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:28.632981  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:28.633009  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:28.633554  163762 ssh_runner.go:195] Run: cat /version.json
	I1029 09:13:28.633578  163762 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:13:28.636358  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:28.636560  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:28.636793  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:28.636817  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:28.636980  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:28.637013  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:28.636981  163762 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/test-preload-740345/id_rsa Username:docker}
	I1029 09:13:28.637174  163762 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/test-preload-740345/id_rsa Username:docker}
	I1029 09:13:28.744499  163762 ssh_runner.go:195] Run: systemctl --version
	I1029 09:13:28.750641  163762 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:13:28.900095  163762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:13:28.907490  163762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:13:28.907570  163762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:13:28.930777  163762 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1029 09:13:28.930811  163762 start.go:496] detecting cgroup driver to use...
	I1029 09:13:28.930896  163762 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:13:28.955564  163762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:13:28.974453  163762 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:13:28.974525  163762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:13:28.992302  163762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:13:29.008785  163762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:13:29.153668  163762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:13:29.377531  163762 docker.go:234] disabling docker service ...
	I1029 09:13:29.377624  163762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:13:29.394748  163762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:13:29.409695  163762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:13:29.556905  163762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:13:29.692627  163762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:13:29.709896  163762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:13:29.735099  163762 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1029 09:13:29.735171  163762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:13:29.748732  163762 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:13:29.748804  163762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:13:29.761311  163762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:13:29.773284  163762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:13:29.785497  163762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:13:29.798346  163762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:13:29.810344  163762 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:13:29.830959  163762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:13:29.844043  163762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:13:29.854629  163762 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1029 09:13:29.854723  163762 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1029 09:13:29.875256  163762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:13:29.887204  163762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:13:30.024808  163762 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:13:30.144732  163762 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:13:30.144823  163762 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:13:30.150100  163762 start.go:564] Will wait 60s for crictl version
	I1029 09:13:30.150168  163762 ssh_runner.go:195] Run: which crictl
	I1029 09:13:30.153973  163762 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1029 09:13:30.191666  163762 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1029 09:13:30.191787  163762 ssh_runner.go:195] Run: crio --version
	I1029 09:13:30.220859  163762 ssh_runner.go:195] Run: crio --version
	I1029 09:13:30.250569  163762 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1029 09:13:30.254235  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:30.254665  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:30.254691  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:30.254889  163762 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1029 09:13:30.259484  163762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:13:30.274506  163762 kubeadm.go:884] updating cluster {Name:test-preload-740345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.32.0 ClusterName:test-preload-740345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:13:30.274624  163762 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1029 09:13:30.274671  163762 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:13:30.314299  163762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1029 09:13:30.314373  163762 ssh_runner.go:195] Run: which lz4
	I1029 09:13:30.318375  163762 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1029 09:13:30.322809  163762 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1029 09:13:30.322843  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1029 09:13:31.752968  163762 crio.go:462] duration metric: took 1.434634636s to copy over tarball
	I1029 09:13:31.753052  163762 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1029 09:13:33.467476  163762 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.714388089s)
	I1029 09:13:33.467524  163762 crio.go:469] duration metric: took 1.714523363s to extract the tarball
	I1029 09:13:33.467546  163762 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1029 09:13:33.507647  163762 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:13:33.553217  163762 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:13:33.553246  163762 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:13:33.553256  163762 kubeadm.go:935] updating node { 192.168.39.180 8443 v1.32.0 crio true true} ...
	I1029 09:13:33.553363  163762 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-740345 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-740345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:13:33.553450  163762 ssh_runner.go:195] Run: crio config
	I1029 09:13:33.599145  163762 cni.go:84] Creating CNI manager for ""
	I1029 09:13:33.599174  163762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1029 09:13:33.599204  163762 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:13:33.599235  163762 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-740345 NodeName:test-preload-740345 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:13:33.599364  163762 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-740345"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.180"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:13:33.599481  163762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1029 09:13:33.611895  163762 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:13:33.611995  163762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:13:33.623949  163762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1029 09:13:33.645376  163762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:13:33.665798  163762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1029 09:13:33.687129  163762 ssh_runner.go:195] Run: grep 192.168.39.180	control-plane.minikube.internal$ /etc/hosts
	I1029 09:13:33.691344  163762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1029 09:13:33.706170  163762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:13:33.843074  163762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:13:33.864189  163762 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345 for IP: 192.168.39.180
	I1029 09:13:33.864247  163762 certs.go:195] generating shared ca certs ...
	I1029 09:13:33.864270  163762 certs.go:227] acquiring lock for ca certs: {Name:mk7a2a9c7bc52f8ce34b75ca46a18294b750be87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:13:33.864566  163762 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-137232/.minikube/ca.key
	I1029 09:13:33.864696  163762 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.key
	I1029 09:13:33.864717  163762 certs.go:257] generating profile certs ...
	I1029 09:13:33.864856  163762 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/client.key
	I1029 09:13:33.864944  163762 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/apiserver.key.336e99e6
	I1029 09:13:33.865003  163762 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/proxy-client.key
	I1029 09:13:33.865210  163762 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/141231.pem (1338 bytes)
	W1029 09:13:33.865267  163762 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-137232/.minikube/certs/141231_empty.pem, impossibly tiny 0 bytes
	I1029 09:13:33.865285  163762 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:13:33.865335  163762 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:13:33.865383  163762 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:13:33.865444  163762 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/key.pem (1675 bytes)
	I1029 09:13:33.865521  163762 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/files/etc/ssl/certs/1412312.pem (1708 bytes)
	I1029 09:13:33.866316  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:13:33.918687  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1029 09:13:33.959726  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:13:33.990589  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:13:34.021425  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1029 09:13:34.052592  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:13:34.082836  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:13:34.113369  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1029 09:13:34.143251  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:13:34.173212  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/certs/141231.pem --> /usr/share/ca-certificates/141231.pem (1338 bytes)
	I1029 09:13:34.203089  163762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/files/etc/ssl/certs/1412312.pem --> /usr/share/ca-certificates/1412312.pem (1708 bytes)
	I1029 09:13:34.238380  163762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:13:34.261146  163762 ssh_runner.go:195] Run: openssl version
	I1029 09:13:34.267634  163762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141231.pem && ln -fs /usr/share/ca-certificates/141231.pem /etc/ssl/certs/141231.pem"
	I1029 09:13:34.281273  163762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141231.pem
	I1029 09:13:34.286509  163762 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:30 /usr/share/ca-certificates/141231.pem
	I1029 09:13:34.286577  163762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141231.pem
	I1029 09:13:34.293917  163762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141231.pem /etc/ssl/certs/51391683.0"
	I1029 09:13:34.307126  163762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1412312.pem && ln -fs /usr/share/ca-certificates/1412312.pem /etc/ssl/certs/1412312.pem"
	I1029 09:13:34.320049  163762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1412312.pem
	I1029 09:13:34.325569  163762 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:30 /usr/share/ca-certificates/1412312.pem
	I1029 09:13:34.325621  163762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1412312.pem
	I1029 09:13:34.333846  163762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1412312.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:13:34.347451  163762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:13:34.361449  163762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:13:34.366987  163762 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:13:34.367056  163762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:13:34.374324  163762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:13:34.387473  163762 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:13:34.392839  163762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:13:34.400314  163762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:13:34.407821  163762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:13:34.415378  163762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:13:34.422763  163762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:13:34.430395  163762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1029 09:13:34.438173  163762 kubeadm.go:401] StartCluster: {Name:test-preload-740345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
32.0 ClusterName:test-preload-740345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:13:34.438279  163762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:13:34.438334  163762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:13:34.477758  163762 cri.go:89] found id: ""
	I1029 09:13:34.477838  163762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1029 09:13:34.490112  163762 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1029 09:13:34.490136  163762 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1029 09:13:34.490184  163762 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1029 09:13:34.502291  163762 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:13:34.502751  163762 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-740345" does not appear in /home/jenkins/minikube-integration/21800-137232/kubeconfig
	I1029 09:13:34.502874  163762 kubeconfig.go:62] /home/jenkins/minikube-integration/21800-137232/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-740345" cluster setting kubeconfig missing "test-preload-740345" context setting]
	I1029 09:13:34.503191  163762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/kubeconfig: {Name:mk5d77803dd54d458a7a9c3d32d70e7b02c64781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:13:34.503719  163762 kapi.go:59] client config for test-preload-740345: &rest.Config{Host:"https://192.168.39.180:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/client.key", CAFile:"/home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 09:13:34.504129  163762 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1029 09:13:34.504145  163762 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1029 09:13:34.504151  163762 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1029 09:13:34.504155  163762 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1029 09:13:34.504159  163762 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1029 09:13:34.504552  163762 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1029 09:13:34.515835  163762 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.180
	I1029 09:13:34.515868  163762 kubeadm.go:1161] stopping kube-system containers ...
	I1029 09:13:34.515882  163762 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1029 09:13:34.515934  163762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:13:34.556087  163762 cri.go:89] found id: ""
	I1029 09:13:34.556156  163762 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1029 09:13:34.579822  163762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1029 09:13:34.592261  163762 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1029 09:13:34.592282  163762 kubeadm.go:158] found existing configuration files:
	
	I1029 09:13:34.592331  163762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1029 09:13:34.603212  163762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1029 09:13:34.603304  163762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1029 09:13:34.614899  163762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1029 09:13:34.625928  163762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1029 09:13:34.625990  163762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1029 09:13:34.638268  163762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1029 09:13:34.649748  163762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1029 09:13:34.649822  163762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1029 09:13:34.661937  163762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1029 09:13:34.673260  163762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1029 09:13:34.673348  163762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1029 09:13:34.685228  163762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1029 09:13:34.697235  163762 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1029 09:13:34.753569  163762 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1029 09:13:35.546957  163762 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1029 09:13:35.784481  163762 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1029 09:13:35.861340  163762 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
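
	The five commands above re-run individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing node rather than performing a full re-init. As a rough illustration only (not minikube's actual bootstrapper code), the same phase sequence could be driven from Go like this, reusing the binary path and config path shown in the log:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	// Phase order mirrors the log above: certs, kubeconfig, kubelet-start,
	    	// control-plane, etcd. The PATH and --config values are the ones the
	    	// log shows minikube using on this node.
	    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	    	for _, p := range phases {
	    		cmd := fmt.Sprintf(`env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
	    		if out, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
	    			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
	    			return
	    		}
	    		fmt.Printf("phase %q ok\n", p)
	    	}
	    }
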
	I1029 09:13:35.941579  163762 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:13:35.941674  163762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:13:36.442727  163762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:13:36.942466  163762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:13:37.442053  163762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:13:37.942234  163762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:13:38.442599  163762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:13:38.472657  163762 api_server.go:72] duration metric: took 2.531078733s to wait for apiserver process to appear ...
	I1029 09:13:38.472697  163762 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:13:38.472729  163762 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I1029 09:13:40.896425  163762 api_server.go:279] https://192.168.39.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1029 09:13:40.896457  163762 api_server.go:103] status: https://192.168.39.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1029 09:13:40.896473  163762 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I1029 09:13:40.988524  163762 api_server.go:279] https://192.168.39.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:13:40.988556  163762 api_server.go:103] status: https://192.168.39.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:13:40.988573  163762 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I1029 09:13:40.999875  163762 api_server.go:279] https://192.168.39.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:13:40.999910  163762 api_server.go:103] status: https://192.168.39.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:13:41.473725  163762 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I1029 09:13:41.478376  163762 api_server.go:279] https://192.168.39.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:13:41.478401  163762 api_server.go:103] status: https://192.168.39.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:13:41.973044  163762 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I1029 09:13:41.979837  163762 api_server.go:279] https://192.168.39.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1029 09:13:41.979871  163762 api_server.go:103] status: https://192.168.39.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1029 09:13:42.473573  163762 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I1029 09:13:42.478611  163762 api_server.go:279] https://192.168.39.180:8443/healthz returned 200:
	ok
	I1029 09:13:42.485439  163762 api_server.go:141] control plane version: v1.32.0
	I1029 09:13:42.485465  163762 api_server.go:131] duration metric: took 4.012760651s to wait for apiserver health ...
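
	The wait loop above polls the apiserver's /healthz endpoint, tolerating the initial 403 (the unauthenticated probe) and the 500s emitted while post-start hooks finish, until the endpoint finally answers 200 "ok". A minimal self-contained sketch of such a poll in Go, assuming the client certificate, key and CA paths from the profile shown earlier in this log, might look like the following (illustrative only, not minikube's implementation):

	    package main

	    import (
	    	"crypto/tls"
	    	"crypto/x509"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	// Certificate paths taken from the client config logged above; adjust
	    	// for your own environment.
	    	cert, err := tls.LoadX509KeyPair(
	    		"/home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/client.crt",
	    		"/home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/client.key")
	    	if err != nil {
	    		panic(err)
	    	}
	    	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt")
	    	if err != nil {
	    		panic(err)
	    	}
	    	pool := x509.NewCertPool()
	    	pool.AppendCertsFromPEM(caPEM)

	    	client := &http.Client{
	    		Timeout: 5 * time.Second,
	    		Transport: &http.Transport{
	    			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	    		},
	    	}

	    	// Keep probing /healthz until the apiserver reports 200, as the log does.
	    	for {
	    		resp, err := client.Get("https://192.168.39.180:8443/healthz")
	    		if err == nil {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				fmt.Println("healthz:", string(body)) // prints "ok"
	    				return
	    			}
	    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    }
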
	I1029 09:13:42.485475  163762 cni.go:84] Creating CNI manager for ""
	I1029 09:13:42.485483  163762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1029 09:13:42.486875  163762 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1029 09:13:42.487826  163762 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1029 09:13:42.501695  163762 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1029 09:13:42.527877  163762 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:13:42.532624  163762 system_pods.go:59] 7 kube-system pods found
	I1029 09:13:42.532659  163762 system_pods.go:61] "coredns-668d6bf9bc-7pr7c" [e38b44e5-195c-465c-99a8-b9dd9726567e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1029 09:13:42.532667  163762 system_pods.go:61] "etcd-test-preload-740345" [3e8fb19e-29b3-4630-b75c-8132cb4b08ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:13:42.532675  163762 system_pods.go:61] "kube-apiserver-test-preload-740345" [c0df143d-4371-41fa-bf1b-f7e05e31c78e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:13:42.532682  163762 system_pods.go:61] "kube-controller-manager-test-preload-740345" [f35d3199-595b-432c-a292-794aa02c795e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:13:42.532688  163762 system_pods.go:61] "kube-proxy-z2rqp" [853c4890-c413-40ea-92de-346ded482c87] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1029 09:13:42.532693  163762 system_pods.go:61] "kube-scheduler-test-preload-740345" [ed47172f-119a-49bf-8a8a-a47ac4e39b56] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:13:42.532699  163762 system_pods.go:61] "storage-provisioner" [80de636d-a6df-4725-b56e-a06d6da4f7e4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1029 09:13:42.532705  163762 system_pods.go:74] duration metric: took 4.799735ms to wait for pod list to return data ...
	I1029 09:13:42.532712  163762 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:13:42.537198  163762 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1029 09:13:42.537226  163762 node_conditions.go:123] node cpu capacity is 2
	I1029 09:13:42.537239  163762 node_conditions.go:105] duration metric: took 4.522188ms to run NodePressure ...
	I1029 09:13:42.537288  163762 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1029 09:13:42.807752  163762 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1029 09:13:42.811030  163762 kubeadm.go:744] kubelet initialised
	I1029 09:13:42.811053  163762 kubeadm.go:745] duration metric: took 3.273714ms waiting for restarted kubelet to initialise ...
	I1029 09:13:42.811069  163762 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 09:13:42.825058  163762 ops.go:34] apiserver oom_adj: -16
	I1029 09:13:42.825079  163762 kubeadm.go:602] duration metric: took 8.334936154s to restartPrimaryControlPlane
	I1029 09:13:42.825088  163762 kubeadm.go:403] duration metric: took 8.386928615s to StartCluster
	I1029 09:13:42.825107  163762 settings.go:142] acquiring lock: {Name:mkf57999febc1e58dfdf035d9c465d8b8e2fde1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:13:42.825186  163762 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-137232/kubeconfig
	I1029 09:13:42.825803  163762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/kubeconfig: {Name:mk5d77803dd54d458a7a9c3d32d70e7b02c64781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:13:42.826067  163762 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:13:42.826154  163762 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:13:42.826285  163762 addons.go:70] Setting storage-provisioner=true in profile "test-preload-740345"
	I1029 09:13:42.826299  163762 config.go:182] Loaded profile config "test-preload-740345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1029 09:13:42.826308  163762 addons.go:239] Setting addon storage-provisioner=true in "test-preload-740345"
	W1029 09:13:42.826318  163762 addons.go:248] addon storage-provisioner should already be in state true
	I1029 09:13:42.826328  163762 addons.go:70] Setting default-storageclass=true in profile "test-preload-740345"
	I1029 09:13:42.826350  163762 host.go:66] Checking if "test-preload-740345" exists ...
	I1029 09:13:42.826359  163762 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-740345"
	I1029 09:13:42.827287  163762 out.go:179] * Verifying Kubernetes components...
	I1029 09:13:42.828295  163762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:13:42.828355  163762 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1029 09:13:42.828697  163762 kapi.go:59] client config for test-preload-740345: &rest.Config{Host:"https://192.168.39.180:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/client.key", CAFile:"/home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 09:13:42.828942  163762 addons.go:239] Setting addon default-storageclass=true in "test-preload-740345"
	W1029 09:13:42.828954  163762 addons.go:248] addon default-storageclass should already be in state true
	I1029 09:13:42.828972  163762 host.go:66] Checking if "test-preload-740345" exists ...
	I1029 09:13:42.829484  163762 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:13:42.829505  163762 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1029 09:13:42.830479  163762 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1029 09:13:42.830493  163762 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1029 09:13:42.832430  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:42.832836  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:42.832876  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:42.833066  163762 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/test-preload-740345/id_rsa Username:docker}
	I1029 09:13:42.833083  163762 main.go:143] libmachine: domain test-preload-740345 has defined MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:42.833399  163762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:56", ip: ""} in network mk-test-preload-740345: {Iface:virbr1 ExpiryTime:2025-10-29 10:13:25 +0000 UTC Type:0 Mac:52:54:00:f1:93:56 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:test-preload-740345 Clientid:01:52:54:00:f1:93:56}
	I1029 09:13:42.833431  163762 main.go:143] libmachine: domain test-preload-740345 has defined IP address 192.168.39.180 and MAC address 52:54:00:f1:93:56 in network mk-test-preload-740345
	I1029 09:13:42.833570  163762 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/test-preload-740345/id_rsa Username:docker}
	I1029 09:13:43.029678  163762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:13:43.059142  163762 node_ready.go:35] waiting up to 6m0s for node "test-preload-740345" to be "Ready" ...
	I1029 09:13:43.212582  163762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1029 09:13:43.231834  163762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1029 09:13:43.880446  163762 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1029 09:13:43.881379  163762 addons.go:515] duration metric: took 1.055240362s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1029 09:13:45.061967  163762 node_ready.go:57] node "test-preload-740345" has "Ready":"False" status (will retry)
	W1029 09:13:47.063214  163762 node_ready.go:57] node "test-preload-740345" has "Ready":"False" status (will retry)
	W1029 09:13:49.563286  163762 node_ready.go:57] node "test-preload-740345" has "Ready":"False" status (will retry)
	I1029 09:13:51.562537  163762 node_ready.go:49] node "test-preload-740345" is "Ready"
	I1029 09:13:51.562583  163762 node_ready.go:38] duration metric: took 8.503364507s for node "test-preload-740345" to be "Ready" ...
	I1029 09:13:51.562602  163762 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:13:51.562661  163762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:13:51.582302  163762 api_server.go:72] duration metric: took 8.756196365s to wait for apiserver process to appear ...
	I1029 09:13:51.582340  163762 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:13:51.582364  163762 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I1029 09:13:51.587627  163762 api_server.go:279] https://192.168.39.180:8443/healthz returned 200:
	ok
	I1029 09:13:51.588824  163762 api_server.go:141] control plane version: v1.32.0
	I1029 09:13:51.588852  163762 api_server.go:131] duration metric: took 6.504153ms to wait for apiserver health ...
	I1029 09:13:51.588864  163762 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:13:51.592702  163762 system_pods.go:59] 7 kube-system pods found
	I1029 09:13:51.592729  163762 system_pods.go:61] "coredns-668d6bf9bc-7pr7c" [e38b44e5-195c-465c-99a8-b9dd9726567e] Running
	I1029 09:13:51.592737  163762 system_pods.go:61] "etcd-test-preload-740345" [3e8fb19e-29b3-4630-b75c-8132cb4b08ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:13:51.592744  163762 system_pods.go:61] "kube-apiserver-test-preload-740345" [c0df143d-4371-41fa-bf1b-f7e05e31c78e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:13:51.592751  163762 system_pods.go:61] "kube-controller-manager-test-preload-740345" [f35d3199-595b-432c-a292-794aa02c795e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:13:51.592755  163762 system_pods.go:61] "kube-proxy-z2rqp" [853c4890-c413-40ea-92de-346ded482c87] Running
	I1029 09:13:51.592766  163762 system_pods.go:61] "kube-scheduler-test-preload-740345" [ed47172f-119a-49bf-8a8a-a47ac4e39b56] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:13:51.592773  163762 system_pods.go:61] "storage-provisioner" [80de636d-a6df-4725-b56e-a06d6da4f7e4] Running
	I1029 09:13:51.592779  163762 system_pods.go:74] duration metric: took 3.909338ms to wait for pod list to return data ...
	I1029 09:13:51.592791  163762 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:13:51.595344  163762 default_sa.go:45] found service account: "default"
	I1029 09:13:51.595373  163762 default_sa.go:55] duration metric: took 2.570514ms for default service account to be created ...
	I1029 09:13:51.595384  163762 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:13:51.597760  163762 system_pods.go:86] 7 kube-system pods found
	I1029 09:13:51.597796  163762 system_pods.go:89] "coredns-668d6bf9bc-7pr7c" [e38b44e5-195c-465c-99a8-b9dd9726567e] Running
	I1029 09:13:51.597807  163762 system_pods.go:89] "etcd-test-preload-740345" [3e8fb19e-29b3-4630-b75c-8132cb4b08ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:13:51.597817  163762 system_pods.go:89] "kube-apiserver-test-preload-740345" [c0df143d-4371-41fa-bf1b-f7e05e31c78e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:13:51.597830  163762 system_pods.go:89] "kube-controller-manager-test-preload-740345" [f35d3199-595b-432c-a292-794aa02c795e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:13:51.597836  163762 system_pods.go:89] "kube-proxy-z2rqp" [853c4890-c413-40ea-92de-346ded482c87] Running
	I1029 09:13:51.597844  163762 system_pods.go:89] "kube-scheduler-test-preload-740345" [ed47172f-119a-49bf-8a8a-a47ac4e39b56] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:13:51.597849  163762 system_pods.go:89] "storage-provisioner" [80de636d-a6df-4725-b56e-a06d6da4f7e4] Running
	I1029 09:13:51.597859  163762 system_pods.go:126] duration metric: took 2.468081ms to wait for k8s-apps to be running ...
	I1029 09:13:51.597870  163762 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:13:51.597931  163762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:13:51.613815  163762 system_svc.go:56] duration metric: took 15.931378ms WaitForService to wait for kubelet
	I1029 09:13:51.613852  163762 kubeadm.go:587] duration metric: took 8.787754821s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:13:51.613872  163762 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:13:51.616545  163762 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1029 09:13:51.616569  163762 node_conditions.go:123] node cpu capacity is 2
	I1029 09:13:51.616580  163762 node_conditions.go:105] duration metric: took 2.703052ms to run NodePressure ...
	I1029 09:13:51.616592  163762 start.go:242] waiting for startup goroutines ...
	I1029 09:13:51.616599  163762 start.go:247] waiting for cluster config update ...
	I1029 09:13:51.616610  163762 start.go:256] writing updated cluster config ...
	I1029 09:13:51.616886  163762 ssh_runner.go:195] Run: rm -f paused
	I1029 09:13:51.621954  163762 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:13:51.622475  163762 kapi.go:59] client config for test-preload-740345: &rest.Config{Host:"https://192.168.39.180:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-137232/.minikube/profiles/test-preload-740345/client.key", CAFile:"/home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 09:13:51.625774  163762 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-7pr7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:13:51.629861  163762 pod_ready.go:94] pod "coredns-668d6bf9bc-7pr7c" is "Ready"
	I1029 09:13:51.629880  163762 pod_ready.go:86] duration metric: took 4.086241ms for pod "coredns-668d6bf9bc-7pr7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:13:51.631506  163762 pod_ready.go:83] waiting for pod "etcd-test-preload-740345" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:13:53.637339  163762 pod_ready.go:104] pod "etcd-test-preload-740345" is not "Ready", error: <nil>
	I1029 09:13:54.637971  163762 pod_ready.go:94] pod "etcd-test-preload-740345" is "Ready"
	I1029 09:13:54.638006  163762 pod_ready.go:86] duration metric: took 3.006480331s for pod "etcd-test-preload-740345" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:13:54.640390  163762 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-740345" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:13:54.644805  163762 pod_ready.go:94] pod "kube-apiserver-test-preload-740345" is "Ready"
	I1029 09:13:54.644831  163762 pod_ready.go:86] duration metric: took 4.411642ms for pod "kube-apiserver-test-preload-740345" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:13:54.646585  163762 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-740345" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:13:54.651904  163762 pod_ready.go:94] pod "kube-controller-manager-test-preload-740345" is "Ready"
	I1029 09:13:54.651925  163762 pod_ready.go:86] duration metric: took 5.321235ms for pod "kube-controller-manager-test-preload-740345" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:13:54.654128  163762 pod_ready.go:83] waiting for pod "kube-proxy-z2rqp" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:13:55.025670  163762 pod_ready.go:94] pod "kube-proxy-z2rqp" is "Ready"
	I1029 09:13:55.025710  163762 pod_ready.go:86] duration metric: took 371.561024ms for pod "kube-proxy-z2rqp" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:13:55.226005  163762 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-740345" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:13:55.626557  163762 pod_ready.go:94] pod "kube-scheduler-test-preload-740345" is "Ready"
	I1029 09:13:55.626588  163762 pod_ready.go:86] duration metric: took 400.545931ms for pod "kube-scheduler-test-preload-740345" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:13:55.626605  163762 pod_ready.go:40] duration metric: took 4.004616873s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:13:55.672782  163762 start.go:628] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1029 09:13:55.674250  163762 out.go:203] 
	W1029 09:13:55.675120  163762 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1029 09:13:55.675932  163762 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1029 09:13:55.677044  163762 out.go:179] * Done! kubectl is now configured to use "test-preload-740345" cluster and "default" namespace by default
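
	The final wait in the log confirms that the node reports Ready and that every kube-system pod carrying one of the listed labels (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler) reaches Ready. A hedged sketch of an equivalent check driven from Go via `kubectl wait` (illustrative; not the test harness's code) could be:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // kubectlWait shells out to kubectl against the test-preload-740345 context.
	    func kubectlWait(args ...string) error {
	    	cmd := exec.Command("kubectl", append([]string{"--context", "test-preload-740345"}, args...)...)
	    	out, err := cmd.CombinedOutput()
	    	fmt.Print(string(out))
	    	return err
	    }

	    func main() {
	    	// Node readiness first, then the same pod label selectors the log waits on.
	    	if err := kubectlWait("wait", "--for=condition=Ready", "node/test-preload-740345", "--timeout=6m"); err != nil {
	    		panic(err)
	    	}
	    	selectors := []string{
	    		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
	    		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	    	}
	    	for _, s := range selectors {
	    		if err := kubectlWait("wait", "-n", "kube-system", "--for=condition=Ready", "pod", "-l", s, "--timeout=4m"); err != nil {
	    			panic(err)
	    		}
	    	}
	    }

	In the log itself the node became Ready after about 8.5s and all labelled pods within a further ~4s, so the restart completed well inside those timeouts.
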
	
	
	==> CRI-O <==
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.505405969Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761729236505385660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c65a7c3-14bf-48ed-8894-94457dd13ce3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.505973170Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11236d88-8d33-4acc-91a5-280101825434 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.506047060Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11236d88-8d33-4acc-91a5-280101825434 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.506208655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52f814909737ca7ec65d88d0458b803b7032d4b1159b582bb42c955c253fafdd,PodSandboxId:381d67ff383bae92b0b90029aab7f2d7d91e84fe32ff3187d737ac66cc5a69e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761729229893492825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7pr7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38b44e5-195c-465c-99a8-b9dd9726567e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a5430829926dd545d52c4c084ca5370cc1ea77f4f488500bc4f9e407fc7ba0,PodSandboxId:8aa2954bf1515c7957c2251786390da5a9b3a43de533f7259555b87b9c5705eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761729222301841109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2rqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 853c4890-c413-40ea-92de-346ded482c87,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122616bb256c09bde83e8cd75bac29a870c2e04d75d57f70e3165b077c8eeea1,PodSandboxId:6ef6d6935bac9f64af920977a8335345bd1e5287fdd763ee8b2d7703c469b267,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761729222278112593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80
de636d-a6df-4725-b56e-a06d6da4f7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f702cca5dfb5498206d6b6325476266a31f162beb8766487e833eb399303306,PodSandboxId:19098e0976d891dc01b8e15e59ce22baaae62a24b68592bd98e4e39b8ddba14f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761729218104680559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1320033b83128e79d737d3c29d16c601,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d86974aa1b9fcc1278d73f53dde3e5e8703c805dc419e46b1581ee66899fdb,PodSandboxId:b6446fff55e1d4372fffdfcffe35e176cf5365664a1a7a88a535dc8134a0a25a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761729218063337086,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d42426f91f9e774659c8260
bd7fec08f,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed13601a59afc5bc27985b34ca61717e6e670f864a5bd8da3e2ea8824397655,PodSandboxId:1efb1a6d984a087c6e0c8b4ba2a6b1a530d2da9e5d2a3fc7e7c7efcd63692ad1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761729218094682335,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad16a7ae9741268201f907520e85bd84,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1808a8fcb2ac8940e5f31324eed42305d68618a4a9d38a94f276f79a6bdb76e4,PodSandboxId:c7f40461ac0203b91a7e232548986b50b37ca6708b9124cefcb19c300e89018e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761729218048592612,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f71f879735962cec69785e82bbdb445c,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11236d88-8d33-4acc-91a5-280101825434 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.542904316Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6c36fc1-189a-4b86-b32d-23f1bfbe0c4c name=/runtime.v1.RuntimeService/Version
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.542995545Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6c36fc1-189a-4b86-b32d-23f1bfbe0c4c name=/runtime.v1.RuntimeService/Version
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.545064097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c798c887-7037-4709-8f2c-24714401a607 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.545875189Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761729236545729814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c798c887-7037-4709-8f2c-24714401a607 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.546621664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4ef92e7-2186-4fdf-89dc-6917a472d1bf name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.546838274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4ef92e7-2186-4fdf-89dc-6917a472d1bf name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.547005689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52f814909737ca7ec65d88d0458b803b7032d4b1159b582bb42c955c253fafdd,PodSandboxId:381d67ff383bae92b0b90029aab7f2d7d91e84fe32ff3187d737ac66cc5a69e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761729229893492825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7pr7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38b44e5-195c-465c-99a8-b9dd9726567e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a5430829926dd545d52c4c084ca5370cc1ea77f4f488500bc4f9e407fc7ba0,PodSandboxId:8aa2954bf1515c7957c2251786390da5a9b3a43de533f7259555b87b9c5705eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761729222301841109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2rqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 853c4890-c413-40ea-92de-346ded482c87,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122616bb256c09bde83e8cd75bac29a870c2e04d75d57f70e3165b077c8eeea1,PodSandboxId:6ef6d6935bac9f64af920977a8335345bd1e5287fdd763ee8b2d7703c469b267,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761729222278112593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80
de636d-a6df-4725-b56e-a06d6da4f7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f702cca5dfb5498206d6b6325476266a31f162beb8766487e833eb399303306,PodSandboxId:19098e0976d891dc01b8e15e59ce22baaae62a24b68592bd98e4e39b8ddba14f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761729218104680559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1320033b83128e79d737d3c29d16c601,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d86974aa1b9fcc1278d73f53dde3e5e8703c805dc419e46b1581ee66899fdb,PodSandboxId:b6446fff55e1d4372fffdfcffe35e176cf5365664a1a7a88a535dc8134a0a25a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761729218063337086,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d42426f91f9e774659c8260
bd7fec08f,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed13601a59afc5bc27985b34ca61717e6e670f864a5bd8da3e2ea8824397655,PodSandboxId:1efb1a6d984a087c6e0c8b4ba2a6b1a530d2da9e5d2a3fc7e7c7efcd63692ad1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761729218094682335,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad16a7ae9741268201f907520e85bd84,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1808a8fcb2ac8940e5f31324eed42305d68618a4a9d38a94f276f79a6bdb76e4,PodSandboxId:c7f40461ac0203b91a7e232548986b50b37ca6708b9124cefcb19c300e89018e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761729218048592612,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f71f879735962cec69785e82bbdb445c,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4ef92e7-2186-4fdf-89dc-6917a472d1bf name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.587230458Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9f149b2-e606-424a-a863-d1fd5a5467ff name=/runtime.v1.RuntimeService/Version
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.587498886Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9f149b2-e606-424a-a863-d1fd5a5467ff name=/runtime.v1.RuntimeService/Version
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.589071226Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=40088731-fc3e-4e2c-853f-5a589f3c7e15 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.589586153Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761729236589562249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40088731-fc3e-4e2c-853f-5a589f3c7e15 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.590235893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e851028-6dd4-43e1-bf2d-2bd32a89312f name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.590342698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e851028-6dd4-43e1-bf2d-2bd32a89312f name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.590536513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52f814909737ca7ec65d88d0458b803b7032d4b1159b582bb42c955c253fafdd,PodSandboxId:381d67ff383bae92b0b90029aab7f2d7d91e84fe32ff3187d737ac66cc5a69e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761729229893492825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7pr7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38b44e5-195c-465c-99a8-b9dd9726567e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a5430829926dd545d52c4c084ca5370cc1ea77f4f488500bc4f9e407fc7ba0,PodSandboxId:8aa2954bf1515c7957c2251786390da5a9b3a43de533f7259555b87b9c5705eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761729222301841109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2rqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 853c4890-c413-40ea-92de-346ded482c87,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122616bb256c09bde83e8cd75bac29a870c2e04d75d57f70e3165b077c8eeea1,PodSandboxId:6ef6d6935bac9f64af920977a8335345bd1e5287fdd763ee8b2d7703c469b267,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761729222278112593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80
de636d-a6df-4725-b56e-a06d6da4f7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f702cca5dfb5498206d6b6325476266a31f162beb8766487e833eb399303306,PodSandboxId:19098e0976d891dc01b8e15e59ce22baaae62a24b68592bd98e4e39b8ddba14f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761729218104680559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1320033b83128e79d737d3c29d16c601,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d86974aa1b9fcc1278d73f53dde3e5e8703c805dc419e46b1581ee66899fdb,PodSandboxId:b6446fff55e1d4372fffdfcffe35e176cf5365664a1a7a88a535dc8134a0a25a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761729218063337086,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d42426f91f9e774659c8260
bd7fec08f,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed13601a59afc5bc27985b34ca61717e6e670f864a5bd8da3e2ea8824397655,PodSandboxId:1efb1a6d984a087c6e0c8b4ba2a6b1a530d2da9e5d2a3fc7e7c7efcd63692ad1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761729218094682335,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad16a7ae9741268201f907520e85bd84,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1808a8fcb2ac8940e5f31324eed42305d68618a4a9d38a94f276f79a6bdb76e4,PodSandboxId:c7f40461ac0203b91a7e232548986b50b37ca6708b9124cefcb19c300e89018e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761729218048592612,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f71f879735962cec69785e82bbdb445c,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e851028-6dd4-43e1-bf2d-2bd32a89312f name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.626091458Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38a73c7b-2c6c-4aa5-8921-e5d3f806deca name=/runtime.v1.RuntimeService/Version
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.626356552Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38a73c7b-2c6c-4aa5-8921-e5d3f806deca name=/runtime.v1.RuntimeService/Version
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.627670550Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24ff12b5-b48b-426c-bc00-2eed4f920071 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.628541437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761729236628510833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24ff12b5-b48b-426c-bc00-2eed4f920071 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.629030676Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a30a5df-7c31-40df-96f9-abac90e7c72f name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.629269526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a30a5df-7c31-40df-96f9-abac90e7c72f name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:13:56 test-preload-740345 crio[826]: time="2025-10-29 09:13:56.629596845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52f814909737ca7ec65d88d0458b803b7032d4b1159b582bb42c955c253fafdd,PodSandboxId:381d67ff383bae92b0b90029aab7f2d7d91e84fe32ff3187d737ac66cc5a69e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761729229893492825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7pr7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38b44e5-195c-465c-99a8-b9dd9726567e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a5430829926dd545d52c4c084ca5370cc1ea77f4f488500bc4f9e407fc7ba0,PodSandboxId:8aa2954bf1515c7957c2251786390da5a9b3a43de533f7259555b87b9c5705eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761729222301841109,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2rqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 853c4890-c413-40ea-92de-346ded482c87,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122616bb256c09bde83e8cd75bac29a870c2e04d75d57f70e3165b077c8eeea1,PodSandboxId:6ef6d6935bac9f64af920977a8335345bd1e5287fdd763ee8b2d7703c469b267,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761729222278112593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80
de636d-a6df-4725-b56e-a06d6da4f7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f702cca5dfb5498206d6b6325476266a31f162beb8766487e833eb399303306,PodSandboxId:19098e0976d891dc01b8e15e59ce22baaae62a24b68592bd98e4e39b8ddba14f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761729218104680559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1320033b83128e79d737d3c29d16c601,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d86974aa1b9fcc1278d73f53dde3e5e8703c805dc419e46b1581ee66899fdb,PodSandboxId:b6446fff55e1d4372fffdfcffe35e176cf5365664a1a7a88a535dc8134a0a25a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761729218063337086,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d42426f91f9e774659c8260
bd7fec08f,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed13601a59afc5bc27985b34ca61717e6e670f864a5bd8da3e2ea8824397655,PodSandboxId:1efb1a6d984a087c6e0c8b4ba2a6b1a530d2da9e5d2a3fc7e7c7efcd63692ad1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761729218094682335,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad16a7ae9741268201f907520e85bd84,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1808a8fcb2ac8940e5f31324eed42305d68618a4a9d38a94f276f79a6bdb76e4,PodSandboxId:c7f40461ac0203b91a7e232548986b50b37ca6708b9124cefcb19c300e89018e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761729218048592612,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-740345,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f71f879735962cec69785e82bbdb445c,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a30a5df-7c31-40df-96f9-abac90e7c72f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	52f814909737c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   6 seconds ago       Running             coredns                   1                   381d67ff383ba       coredns-668d6bf9bc-7pr7c
	d6a5430829926       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   14 seconds ago      Running             kube-proxy                1                   8aa2954bf1515       kube-proxy-z2rqp
	122616bb256c0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   6ef6d6935bac9       storage-provisioner
	1f702cca5dfb5       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   18 seconds ago      Running             etcd                      1                   19098e0976d89       etcd-test-preload-740345
	8ed13601a59af       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   18 seconds ago      Running             kube-scheduler            1                   1efb1a6d984a0       kube-scheduler-test-preload-740345
	b6d86974aa1b9       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   18 seconds ago      Running             kube-controller-manager   1                   b6446fff55e1d       kube-controller-manager-test-preload-740345
	1808a8fcb2ac8       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   18 seconds ago      Running             kube-apiserver            1                   c7f40461ac020       kube-apiserver-test-preload-740345
	
	
	==> coredns [52f814909737ca7ec65d88d0458b803b7032d4b1159b582bb42c955c253fafdd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34226 - 60206 "HINFO IN 147018334484182487.352749845776957963. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.037319093s
	
	
	==> describe nodes <==
	Name:               test-preload-740345
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-740345
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=test-preload-740345
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_12_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:12:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-740345
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:13:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:13:51 +0000   Wed, 29 Oct 2025 09:12:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:13:51 +0000   Wed, 29 Oct 2025 09:12:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:13:51 +0000   Wed, 29 Oct 2025 09:12:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:13:51 +0000   Wed, 29 Oct 2025 09:13:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.180
	  Hostname:    test-preload-740345
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ee02003a92f4f0791e61da45ae7decf
	  System UUID:                5ee02003-a92f-4f07-91e6-1da45ae7decf
	  Boot ID:                    652a509d-5be0-4822-82e4-9d554f1dd00b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-7pr7c                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     78s
	  kube-system                 etcd-test-preload-740345                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         83s
	  kube-system                 kube-apiserver-test-preload-740345             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-test-preload-740345    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-z2rqp                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-test-preload-740345             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 77s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  88s (x8 over 89s)  kubelet          Node test-preload-740345 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s (x8 over 89s)  kubelet          Node test-preload-740345 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s (x7 over 89s)  kubelet          Node test-preload-740345 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     83s                kubelet          Node test-preload-740345 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  83s                kubelet          Node test-preload-740345 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s                kubelet          Node test-preload-740345 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 83s                kubelet          Starting kubelet.
	  Normal   NodeReady                82s                kubelet          Node test-preload-740345 status is now: NodeReady
	  Normal   RegisteredNode           79s                node-controller  Node test-preload-740345 event: Registered Node test-preload-740345 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-740345 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-740345 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-740345 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 15s                kubelet          Node test-preload-740345 has been rebooted, boot id: 652a509d-5be0-4822-82e4-9d554f1dd00b
	  Normal   RegisteredNode           12s                node-controller  Node test-preload-740345 event: Registered Node test-preload-740345 in Controller
	
	
	==> dmesg <==
	[Oct29 09:13] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001476] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001292] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.013064] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.104671] kauditd_printk_skb: 88 callbacks suppressed
	[  +6.557735] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.000052] kauditd_printk_skb: 128 callbacks suppressed
	[  +6.298451] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [1f702cca5dfb5498206d6b6325476266a31f162beb8766487e833eb399303306] <==
	{"level":"info","ts":"2025-10-29T09:13:38.556384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 switched to configuration voters=(808613133158692504)"}
	{"level":"info","ts":"2025-10-29T09:13:38.562218Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","added-peer-id":"b38c55c42a3b698","added-peer-peer-urls":["https://192.168.39.180:2380"]}
	{"level":"info","ts":"2025-10-29T09:13:38.562317Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:13:38.562357Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-29T09:13:38.566867Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-29T09:13:38.568368Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"b38c55c42a3b698","initial-advertise-peer-urls":["https://192.168.39.180:2380"],"listen-peer-urls":["https://192.168.39.180:2380"],"advertise-client-urls":["https://192.168.39.180:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.180:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-29T09:13:38.568422Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-29T09:13:38.568507Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2025-10-29T09:13:38.568812Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2025-10-29T09:13:39.830913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-29T09:13:39.830950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-29T09:13:39.830993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 received MsgPreVoteResp from b38c55c42a3b698 at term 2"}
	{"level":"info","ts":"2025-10-29T09:13:39.831005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became candidate at term 3"}
	{"level":"info","ts":"2025-10-29T09:13:39.831014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 received MsgVoteResp from b38c55c42a3b698 at term 3"}
	{"level":"info","ts":"2025-10-29T09:13:39.831023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became leader at term 3"}
	{"level":"info","ts":"2025-10-29T09:13:39.831029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b38c55c42a3b698 elected leader b38c55c42a3b698 at term 3"}
	{"level":"info","ts":"2025-10-29T09:13:39.833047Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"b38c55c42a3b698","local-member-attributes":"{Name:test-preload-740345 ClientURLs:[https://192.168.39.180:2379]}","request-path":"/0/members/b38c55c42a3b698/attributes","cluster-id":"5a7d3c553a64e690","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-29T09:13:39.833114Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:13:39.833125Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-29T09:13:39.833258Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-29T09:13:39.834556Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-29T09:13:39.834292Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-29T09:13:39.835012Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-29T09:13:39.835297Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.180:2379"}
	{"level":"info","ts":"2025-10-29T09:13:39.835548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:13:56 up 0 min,  0 users,  load average: 0.77, 0.20, 0.07
	Linux test-preload-740345 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1808a8fcb2ac8940e5f31324eed42305d68618a4a9d38a94f276f79a6bdb76e4] <==
	I1029 09:13:40.953580       1 shared_informer.go:320] Caches are synced for configmaps
	I1029 09:13:40.953639       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:13:40.957528       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1029 09:13:40.957558       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1029 09:13:40.957712       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 09:13:40.961142       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1029 09:13:40.976473       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1029 09:13:40.976661       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E1029 09:13:40.991339       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:13:41.006830       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1029 09:13:41.009175       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1029 09:13:41.009349       1 aggregator.go:171] initial CRD sync complete...
	I1029 09:13:41.009358       1 autoregister_controller.go:144] Starting autoregister controller
	I1029 09:13:41.009363       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1029 09:13:41.009367       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:13:41.058002       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1029 09:13:41.858524       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:13:41.878591       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1029 09:13:42.630455       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1029 09:13:42.661458       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1029 09:13:42.690500       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:13:42.696995       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:13:44.219230       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:13:44.513491       1 controller.go:615] quota admission added evaluator for: endpoints
	I1029 09:13:44.616674       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b6d86974aa1b9fcc1278d73f53dde3e5e8703c805dc419e46b1581ee66899fdb] <==
	I1029 09:13:44.219205       1 shared_informer.go:320] Caches are synced for namespace
	I1029 09:13:44.221006       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-740345"
	I1029 09:13:44.222671       1 shared_informer.go:320] Caches are synced for resource quota
	I1029 09:13:44.238199       1 shared_informer.go:320] Caches are synced for resource quota
	I1029 09:13:44.242566       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1029 09:13:44.245853       1 shared_informer.go:320] Caches are synced for crt configmap
	I1029 09:13:44.247015       1 shared_informer.go:320] Caches are synced for GC
	I1029 09:13:44.249369       1 shared_informer.go:320] Caches are synced for garbage collector
	I1029 09:13:44.249404       1 shared_informer.go:320] Caches are synced for daemon sets
	I1029 09:13:44.250587       1 shared_informer.go:320] Caches are synced for deployment
	I1029 09:13:44.261231       1 shared_informer.go:320] Caches are synced for garbage collector
	I1029 09:13:44.261376       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:13:44.261386       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:13:44.261478       1 shared_informer.go:320] Caches are synced for disruption
	I1029 09:13:44.261491       1 shared_informer.go:320] Caches are synced for TTL
	I1029 09:13:44.261953       1 shared_informer.go:320] Caches are synced for persistent volume
	I1029 09:13:44.264352       1 shared_informer.go:320] Caches are synced for job
	I1029 09:13:44.626858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="414.972089ms"
	I1029 09:13:44.626948       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="60.907µs"
	I1029 09:13:50.009473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="59.045µs"
	I1029 09:13:51.018754       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.005088ms"
	I1029 09:13:51.018882       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="41.167µs"
	I1029 09:13:51.376599       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-740345"
	I1029 09:13:51.386521       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-740345"
	I1029 09:13:54.198961       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d6a5430829926dd545d52c4c084ca5370cc1ea77f4f488500bc4f9e407fc7ba0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1029 09:13:42.489552       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1029 09:13:42.500147       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.180"]
	E1029 09:13:42.500256       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:13:42.552053       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1029 09:13:42.552097       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1029 09:13:42.552121       1 server_linux.go:170] "Using iptables Proxier"
	I1029 09:13:42.555277       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:13:42.555546       1 server.go:497] "Version info" version="v1.32.0"
	I1029 09:13:42.555683       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:13:42.558323       1 config.go:329] "Starting node config controller"
	I1029 09:13:42.563820       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1029 09:13:42.558445       1 config.go:199] "Starting service config controller"
	I1029 09:13:42.563847       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1029 09:13:42.558450       1 config.go:105] "Starting endpoint slice config controller"
	I1029 09:13:42.563855       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1029 09:13:42.664154       1 shared_informer.go:320] Caches are synced for node config
	I1029 09:13:42.664244       1 shared_informer.go:320] Caches are synced for service config
	I1029 09:13:42.664294       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8ed13601a59afc5bc27985b34ca61717e6e670f864a5bd8da3e2ea8824397655] <==
	I1029 09:13:39.030996       1 serving.go:386] Generated self-signed cert in-memory
	I1029 09:13:41.001824       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1029 09:13:41.001858       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:13:41.010362       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1029 09:13:41.010451       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1029 09:13:41.010479       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1029 09:13:41.010498       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:13:41.013285       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:13:41.013574       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1029 09:13:41.014118       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:13:41.015855       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1029 09:13:41.111601       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I1029 09:13:41.115041       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1029 09:13:41.116411       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 29 09:13:41 test-preload-740345 kubelet[1152]: I1029 09:13:41.044688    1152 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 29 09:13:41 test-preload-740345 kubelet[1152]: I1029 09:13:41.046052    1152 setters.go:602] "Node became not ready" node="test-preload-740345" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-29T09:13:41Z","lastTransitionTime":"2025-10-29T09:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Oct 29 09:13:41 test-preload-740345 kubelet[1152]: E1029 09:13:41.047850    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-740345\" already exists" pod="kube-system/kube-controller-manager-test-preload-740345"
	Oct 29 09:13:41 test-preload-740345 kubelet[1152]: I1029 09:13:41.047937    1152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-740345"
	Oct 29 09:13:41 test-preload-740345 kubelet[1152]: E1029 09:13:41.070092    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-740345\" already exists" pod="kube-system/etcd-test-preload-740345"
	Oct 29 09:13:41 test-preload-740345 kubelet[1152]: I1029 09:13:41.844678    1152 apiserver.go:52] "Watching apiserver"
	Oct 29 09:13:41 test-preload-740345 kubelet[1152]: E1029 09:13:41.849695    1152 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-7pr7c" podUID="e38b44e5-195c-465c-99a8-b9dd9726567e"
	Oct 29 09:13:41 test-preload-740345 kubelet[1152]: I1029 09:13:41.858343    1152 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 29 09:13:41 test-preload-740345 kubelet[1152]: I1029 09:13:41.873868    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/80de636d-a6df-4725-b56e-a06d6da4f7e4-tmp\") pod \"storage-provisioner\" (UID: \"80de636d-a6df-4725-b56e-a06d6da4f7e4\") " pod="kube-system/storage-provisioner"
	Oct 29 09:13:41 test-preload-740345 kubelet[1152]: I1029 09:13:41.874104    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/853c4890-c413-40ea-92de-346ded482c87-xtables-lock\") pod \"kube-proxy-z2rqp\" (UID: \"853c4890-c413-40ea-92de-346ded482c87\") " pod="kube-system/kube-proxy-z2rqp"
	Oct 29 09:13:41 test-preload-740345 kubelet[1152]: I1029 09:13:41.874129    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/853c4890-c413-40ea-92de-346ded482c87-lib-modules\") pod \"kube-proxy-z2rqp\" (UID: \"853c4890-c413-40ea-92de-346ded482c87\") " pod="kube-system/kube-proxy-z2rqp"
	Oct 29 09:13:41 test-preload-740345 kubelet[1152]: E1029 09:13:41.874565    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 29 09:13:41 test-preload-740345 kubelet[1152]: E1029 09:13:41.874617    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e38b44e5-195c-465c-99a8-b9dd9726567e-config-volume podName:e38b44e5-195c-465c-99a8-b9dd9726567e nodeName:}" failed. No retries permitted until 2025-10-29 09:13:42.374601165 +0000 UTC m=+6.621898364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e38b44e5-195c-465c-99a8-b9dd9726567e-config-volume") pod "coredns-668d6bf9bc-7pr7c" (UID: "e38b44e5-195c-465c-99a8-b9dd9726567e") : object "kube-system"/"coredns" not registered
	Oct 29 09:13:42 test-preload-740345 kubelet[1152]: E1029 09:13:42.376656    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 29 09:13:42 test-preload-740345 kubelet[1152]: E1029 09:13:42.376741    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e38b44e5-195c-465c-99a8-b9dd9726567e-config-volume podName:e38b44e5-195c-465c-99a8-b9dd9726567e nodeName:}" failed. No retries permitted until 2025-10-29 09:13:43.376725467 +0000 UTC m=+7.624022654 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e38b44e5-195c-465c-99a8-b9dd9726567e-config-volume") pod "coredns-668d6bf9bc-7pr7c" (UID: "e38b44e5-195c-465c-99a8-b9dd9726567e") : object "kube-system"/"coredns" not registered
	Oct 29 09:13:43 test-preload-740345 kubelet[1152]: E1029 09:13:43.387338    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 29 09:13:43 test-preload-740345 kubelet[1152]: E1029 09:13:43.387445    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e38b44e5-195c-465c-99a8-b9dd9726567e-config-volume podName:e38b44e5-195c-465c-99a8-b9dd9726567e nodeName:}" failed. No retries permitted until 2025-10-29 09:13:45.387428562 +0000 UTC m=+9.634725761 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e38b44e5-195c-465c-99a8-b9dd9726567e-config-volume") pod "coredns-668d6bf9bc-7pr7c" (UID: "e38b44e5-195c-465c-99a8-b9dd9726567e") : object "kube-system"/"coredns" not registered
	Oct 29 09:13:43 test-preload-740345 kubelet[1152]: E1029 09:13:43.882700    1152 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-7pr7c" podUID="e38b44e5-195c-465c-99a8-b9dd9726567e"
	Oct 29 09:13:45 test-preload-740345 kubelet[1152]: E1029 09:13:45.399965    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 29 09:13:45 test-preload-740345 kubelet[1152]: E1029 09:13:45.400052    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e38b44e5-195c-465c-99a8-b9dd9726567e-config-volume podName:e38b44e5-195c-465c-99a8-b9dd9726567e nodeName:}" failed. No retries permitted until 2025-10-29 09:13:49.4000384 +0000 UTC m=+13.647335588 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e38b44e5-195c-465c-99a8-b9dd9726567e-config-volume") pod "coredns-668d6bf9bc-7pr7c" (UID: "e38b44e5-195c-465c-99a8-b9dd9726567e") : object "kube-system"/"coredns" not registered
	Oct 29 09:13:45 test-preload-740345 kubelet[1152]: E1029 09:13:45.884562    1152 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-7pr7c" podUID="e38b44e5-195c-465c-99a8-b9dd9726567e"
	Oct 29 09:13:45 test-preload-740345 kubelet[1152]: E1029 09:13:45.926418    1152 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761729225926119406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 29 09:13:45 test-preload-740345 kubelet[1152]: E1029 09:13:45.926457    1152 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761729225926119406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 29 09:13:55 test-preload-740345 kubelet[1152]: E1029 09:13:55.931115    1152 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761729235930099859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 29 09:13:55 test-preload-740345 kubelet[1152]: E1029 09:13:55.931315    1152 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761729235930099859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [122616bb256c09bde83e8cd75bac29a870c2e04d75d57f70e3165b077c8eeea1] <==
	I1029 09:13:42.359858       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-740345 -n test-preload-740345
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-740345 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-740345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-740345
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-740345: (1.024558381s)
--- FAIL: TestPreload (142.65s)
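Note on the TestPreload failure above: the kubelet log shows coredns blocked on its config-volume ConfigMap ("object "kube-system"/"coredns" not registered") and the node reporting no CNI config ("No CNI configuration file in /etc/cni/net.d/"), which is consistent with the restarted preloaded cluster not yet having its network plugin and API objects back in place. A minimal manual check, assuming the test-preload-740345 profile were still running (it is deleted above, so these commands are purely illustrative and not part of the test run), could be:
	out/minikube-linux-amd64 -p test-preload-740345 ssh "ls /etc/cni/net.d/"
	kubectl --context test-preload-740345 -n kube-system get configmap coredns
	kubectl --context test-preload-740345 -n kube-system get pods -l k8s-app=kube-dns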

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (66.02s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-893324 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-893324 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.377659442s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-893324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-893324" primary control-plane node in "pause-893324" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-893324" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:21:00.652463  171268 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:21:00.652654  171268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:21:00.652671  171268 out.go:374] Setting ErrFile to fd 2...
	I1029 09:21:00.652678  171268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:21:00.652964  171268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
	I1029 09:21:00.653611  171268 out.go:368] Setting JSON to false
	I1029 09:21:00.654993  171268 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7390,"bootTime":1761722271,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:21:00.655127  171268 start.go:143] virtualization: kvm guest
	I1029 09:21:00.656823  171268 out.go:179] * [pause-893324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:21:00.657899  171268 notify.go:221] Checking for updates...
	I1029 09:21:00.657923  171268 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:21:00.659067  171268 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:21:00.660149  171268 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	I1029 09:21:00.661357  171268 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	I1029 09:21:00.662263  171268 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:21:00.663316  171268 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:21:00.665025  171268 config.go:182] Loaded profile config "pause-893324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:21:00.665431  171268 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:21:00.697855  171268 out.go:179] * Using the kvm2 driver based on existing profile
	I1029 09:21:00.698777  171268 start.go:309] selected driver: kvm2
	I1029 09:21:00.698790  171268 start.go:930] validating driver "kvm2" against &{Name:pause-893324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-893324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.89 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:21:00.698906  171268 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:21:00.699909  171268 cni.go:84] Creating CNI manager for ""
	I1029 09:21:00.699969  171268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1029 09:21:00.700039  171268 start.go:353] cluster config:
	{Name:pause-893324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-893324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.89 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:21:00.700224  171268 iso.go:125] acquiring lock: {Name:mk91f2a3d67828aaa5b9f798c71cdbe9317767a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:21:00.701525  171268 out.go:179] * Starting "pause-893324" primary control-plane node in "pause-893324" cluster
	I1029 09:21:00.702731  171268 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:21:00.702771  171268 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-137232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:21:00.702780  171268 cache.go:59] Caching tarball of preloaded images
	I1029 09:21:00.702884  171268 preload.go:233] Found /home/jenkins/minikube-integration/21800-137232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:21:00.702899  171268 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:21:00.703036  171268 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/pause-893324/config.json ...
	I1029 09:21:00.703281  171268 start.go:360] acquireMachinesLock for pause-893324: {Name:mkcf4e1d7f2bf8251db3d5b4273e9a32697d7a63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1029 09:21:00.703332  171268 start.go:364] duration metric: took 30.13µs to acquireMachinesLock for "pause-893324"
	I1029 09:21:00.703350  171268 start.go:96] Skipping create...Using existing machine configuration
	I1029 09:21:00.703357  171268 fix.go:54] fixHost starting: 
	I1029 09:21:00.705185  171268 fix.go:112] recreateIfNeeded on pause-893324: state=Running err=<nil>
	W1029 09:21:00.705226  171268 fix.go:138] unexpected machine state, will restart: <nil>
	I1029 09:21:00.706845  171268 out.go:252] * Updating the running kvm2 "pause-893324" VM ...
	I1029 09:21:00.706871  171268 machine.go:94] provisionDockerMachine start ...
	I1029 09:21:00.709680  171268 main.go:143] libmachine: domain pause-893324 has defined MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:00.710112  171268 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:12:a9", ip: ""} in network mk-pause-893324: {Iface:virbr2 ExpiryTime:2025-10-29 10:19:55 +0000 UTC Type:0 Mac:52:54:00:1e:12:a9 Iaid: IPaddr:192.168.50.89 Prefix:24 Hostname:pause-893324 Clientid:01:52:54:00:1e:12:a9}
	I1029 09:21:00.710136  171268 main.go:143] libmachine: domain pause-893324 has defined IP address 192.168.50.89 and MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:00.710358  171268 main.go:143] libmachine: Using SSH client type: native
	I1029 09:21:00.710737  171268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.89 22 <nil> <nil>}
	I1029 09:21:00.710760  171268 main.go:143] libmachine: About to run SSH command:
	hostname
	I1029 09:21:00.821864  171268 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-893324
	
	I1029 09:21:00.821912  171268 buildroot.go:166] provisioning hostname "pause-893324"
	I1029 09:21:00.825292  171268 main.go:143] libmachine: domain pause-893324 has defined MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:00.825744  171268 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:12:a9", ip: ""} in network mk-pause-893324: {Iface:virbr2 ExpiryTime:2025-10-29 10:19:55 +0000 UTC Type:0 Mac:52:54:00:1e:12:a9 Iaid: IPaddr:192.168.50.89 Prefix:24 Hostname:pause-893324 Clientid:01:52:54:00:1e:12:a9}
	I1029 09:21:00.825783  171268 main.go:143] libmachine: domain pause-893324 has defined IP address 192.168.50.89 and MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:00.825980  171268 main.go:143] libmachine: Using SSH client type: native
	I1029 09:21:00.826246  171268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.89 22 <nil> <nil>}
	I1029 09:21:00.826263  171268 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-893324 && echo "pause-893324" | sudo tee /etc/hostname
	I1029 09:21:00.959924  171268 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-893324
	
	I1029 09:21:00.963287  171268 main.go:143] libmachine: domain pause-893324 has defined MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:00.963719  171268 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:12:a9", ip: ""} in network mk-pause-893324: {Iface:virbr2 ExpiryTime:2025-10-29 10:19:55 +0000 UTC Type:0 Mac:52:54:00:1e:12:a9 Iaid: IPaddr:192.168.50.89 Prefix:24 Hostname:pause-893324 Clientid:01:52:54:00:1e:12:a9}
	I1029 09:21:00.963767  171268 main.go:143] libmachine: domain pause-893324 has defined IP address 192.168.50.89 and MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:00.963947  171268 main.go:143] libmachine: Using SSH client type: native
	I1029 09:21:00.964243  171268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.89 22 <nil> <nil>}
	I1029 09:21:00.964268  171268 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-893324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-893324/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-893324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1029 09:21:01.073172  171268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1029 09:21:01.073210  171268 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21800-137232/.minikube CaCertPath:/home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21800-137232/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21800-137232/.minikube}
	I1029 09:21:01.073237  171268 buildroot.go:174] setting up certificates
	I1029 09:21:01.073275  171268 provision.go:84] configureAuth start
	I1029 09:21:01.076748  171268 main.go:143] libmachine: domain pause-893324 has defined MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:01.077198  171268 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:12:a9", ip: ""} in network mk-pause-893324: {Iface:virbr2 ExpiryTime:2025-10-29 10:19:55 +0000 UTC Type:0 Mac:52:54:00:1e:12:a9 Iaid: IPaddr:192.168.50.89 Prefix:24 Hostname:pause-893324 Clientid:01:52:54:00:1e:12:a9}
	I1029 09:21:01.077234  171268 main.go:143] libmachine: domain pause-893324 has defined IP address 192.168.50.89 and MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:01.079948  171268 main.go:143] libmachine: domain pause-893324 has defined MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:01.080319  171268 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:12:a9", ip: ""} in network mk-pause-893324: {Iface:virbr2 ExpiryTime:2025-10-29 10:19:55 +0000 UTC Type:0 Mac:52:54:00:1e:12:a9 Iaid: IPaddr:192.168.50.89 Prefix:24 Hostname:pause-893324 Clientid:01:52:54:00:1e:12:a9}
	I1029 09:21:01.080337  171268 main.go:143] libmachine: domain pause-893324 has defined IP address 192.168.50.89 and MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:01.080494  171268 provision.go:143] copyHostCerts
	I1029 09:21:01.080560  171268 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-137232/.minikube/ca.pem, removing ...
	I1029 09:21:01.080582  171268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-137232/.minikube/ca.pem
	I1029 09:21:01.080642  171268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21800-137232/.minikube/ca.pem (1082 bytes)
	I1029 09:21:01.080753  171268 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-137232/.minikube/cert.pem, removing ...
	I1029 09:21:01.080763  171268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-137232/.minikube/cert.pem
	I1029 09:21:01.080799  171268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21800-137232/.minikube/cert.pem (1123 bytes)
	I1029 09:21:01.080924  171268 exec_runner.go:144] found /home/jenkins/minikube-integration/21800-137232/.minikube/key.pem, removing ...
	I1029 09:21:01.080937  171268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21800-137232/.minikube/key.pem
	I1029 09:21:01.080967  171268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21800-137232/.minikube/key.pem (1675 bytes)
	I1029 09:21:01.081073  171268 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21800-137232/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca-key.pem org=jenkins.pause-893324 san=[127.0.0.1 192.168.50.89 localhost minikube pause-893324]
	I1029 09:21:01.365381  171268 provision.go:177] copyRemoteCerts
	I1029 09:21:01.365471  171268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1029 09:21:01.367941  171268 main.go:143] libmachine: domain pause-893324 has defined MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:01.368355  171268 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:12:a9", ip: ""} in network mk-pause-893324: {Iface:virbr2 ExpiryTime:2025-10-29 10:19:55 +0000 UTC Type:0 Mac:52:54:00:1e:12:a9 Iaid: IPaddr:192.168.50.89 Prefix:24 Hostname:pause-893324 Clientid:01:52:54:00:1e:12:a9}
	I1029 09:21:01.368375  171268 main.go:143] libmachine: domain pause-893324 has defined IP address 192.168.50.89 and MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:01.368503  171268 sshutil.go:53] new ssh client: &{IP:192.168.50.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/pause-893324/id_rsa Username:docker}
	I1029 09:21:01.464985  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1029 09:21:01.502708  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1029 09:21:01.538733  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1029 09:21:01.576333  171268 provision.go:87] duration metric: took 503.022384ms to configureAuth
	I1029 09:21:01.576370  171268 buildroot.go:189] setting minikube options for container-runtime
	I1029 09:21:01.576687  171268 config.go:182] Loaded profile config "pause-893324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:21:01.580316  171268 main.go:143] libmachine: domain pause-893324 has defined MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:01.580929  171268 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:12:a9", ip: ""} in network mk-pause-893324: {Iface:virbr2 ExpiryTime:2025-10-29 10:19:55 +0000 UTC Type:0 Mac:52:54:00:1e:12:a9 Iaid: IPaddr:192.168.50.89 Prefix:24 Hostname:pause-893324 Clientid:01:52:54:00:1e:12:a9}
	I1029 09:21:01.580976  171268 main.go:143] libmachine: domain pause-893324 has defined IP address 192.168.50.89 and MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:01.581274  171268 main.go:143] libmachine: Using SSH client type: native
	I1029 09:21:01.581592  171268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.89 22 <nil> <nil>}
	I1029 09:21:01.581615  171268 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1029 09:21:07.175527  171268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1029 09:21:07.175555  171268 machine.go:97] duration metric: took 6.468675734s to provisionDockerMachine
	I1029 09:21:07.175571  171268 start.go:293] postStartSetup for "pause-893324" (driver="kvm2")
	I1029 09:21:07.175585  171268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1029 09:21:07.175659  171268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1029 09:21:07.179468  171268 main.go:143] libmachine: domain pause-893324 has defined MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:07.180041  171268 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:12:a9", ip: ""} in network mk-pause-893324: {Iface:virbr2 ExpiryTime:2025-10-29 10:19:55 +0000 UTC Type:0 Mac:52:54:00:1e:12:a9 Iaid: IPaddr:192.168.50.89 Prefix:24 Hostname:pause-893324 Clientid:01:52:54:00:1e:12:a9}
	I1029 09:21:07.180074  171268 main.go:143] libmachine: domain pause-893324 has defined IP address 192.168.50.89 and MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:07.180309  171268 sshutil.go:53] new ssh client: &{IP:192.168.50.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/pause-893324/id_rsa Username:docker}
	I1029 09:21:07.267180  171268 ssh_runner.go:195] Run: cat /etc/os-release
	I1029 09:21:07.272927  171268 info.go:137] Remote host: Buildroot 2025.02
	I1029 09:21:07.272955  171268 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-137232/.minikube/addons for local assets ...
	I1029 09:21:07.273020  171268 filesync.go:126] Scanning /home/jenkins/minikube-integration/21800-137232/.minikube/files for local assets ...
	I1029 09:21:07.273118  171268 filesync.go:149] local asset: /home/jenkins/minikube-integration/21800-137232/.minikube/files/etc/ssl/certs/1412312.pem -> 1412312.pem in /etc/ssl/certs
	I1029 09:21:07.273285  171268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1029 09:21:07.288868  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/files/etc/ssl/certs/1412312.pem --> /etc/ssl/certs/1412312.pem (1708 bytes)
	I1029 09:21:07.330274  171268 start.go:296] duration metric: took 154.685745ms for postStartSetup
	I1029 09:21:07.330319  171268 fix.go:56] duration metric: took 6.626961561s for fixHost
	I1029 09:21:07.333843  171268 main.go:143] libmachine: domain pause-893324 has defined MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:07.334369  171268 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:12:a9", ip: ""} in network mk-pause-893324: {Iface:virbr2 ExpiryTime:2025-10-29 10:19:55 +0000 UTC Type:0 Mac:52:54:00:1e:12:a9 Iaid: IPaddr:192.168.50.89 Prefix:24 Hostname:pause-893324 Clientid:01:52:54:00:1e:12:a9}
	I1029 09:21:07.334417  171268 main.go:143] libmachine: domain pause-893324 has defined IP address 192.168.50.89 and MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:07.334653  171268 main.go:143] libmachine: Using SSH client type: native
	I1029 09:21:07.334927  171268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.89 22 <nil> <nil>}
	I1029 09:21:07.334945  171268 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1029 09:21:07.455852  171268 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761729667.450792191
	
	I1029 09:21:07.455882  171268 fix.go:216] guest clock: 1761729667.450792191
	I1029 09:21:07.455895  171268 fix.go:229] Guest: 2025-10-29 09:21:07.450792191 +0000 UTC Remote: 2025-10-29 09:21:07.33032445 +0000 UTC m=+6.742235053 (delta=120.467741ms)
	I1029 09:21:07.455920  171268 fix.go:200] guest clock delta is within tolerance: 120.467741ms
	I1029 09:21:07.455928  171268 start.go:83] releasing machines lock for "pause-893324", held for 6.752584596s
	I1029 09:21:07.459987  171268 main.go:143] libmachine: domain pause-893324 has defined MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:07.460598  171268 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:12:a9", ip: ""} in network mk-pause-893324: {Iface:virbr2 ExpiryTime:2025-10-29 10:19:55 +0000 UTC Type:0 Mac:52:54:00:1e:12:a9 Iaid: IPaddr:192.168.50.89 Prefix:24 Hostname:pause-893324 Clientid:01:52:54:00:1e:12:a9}
	I1029 09:21:07.460638  171268 main.go:143] libmachine: domain pause-893324 has defined IP address 192.168.50.89 and MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:07.461335  171268 ssh_runner.go:195] Run: cat /version.json
	I1029 09:21:07.461507  171268 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1029 09:21:07.465522  171268 main.go:143] libmachine: domain pause-893324 has defined MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:07.466093  171268 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:12:a9", ip: ""} in network mk-pause-893324: {Iface:virbr2 ExpiryTime:2025-10-29 10:19:55 +0000 UTC Type:0 Mac:52:54:00:1e:12:a9 Iaid: IPaddr:192.168.50.89 Prefix:24 Hostname:pause-893324 Clientid:01:52:54:00:1e:12:a9}
	I1029 09:21:07.466142  171268 main.go:143] libmachine: domain pause-893324 has defined IP address 192.168.50.89 and MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:07.466519  171268 sshutil.go:53] new ssh client: &{IP:192.168.50.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/pause-893324/id_rsa Username:docker}
	I1029 09:21:07.466954  171268 main.go:143] libmachine: domain pause-893324 has defined MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:07.467878  171268 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:12:a9", ip: ""} in network mk-pause-893324: {Iface:virbr2 ExpiryTime:2025-10-29 10:19:55 +0000 UTC Type:0 Mac:52:54:00:1e:12:a9 Iaid: IPaddr:192.168.50.89 Prefix:24 Hostname:pause-893324 Clientid:01:52:54:00:1e:12:a9}
	I1029 09:21:07.467918  171268 main.go:143] libmachine: domain pause-893324 has defined IP address 192.168.50.89 and MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:07.468242  171268 sshutil.go:53] new ssh client: &{IP:192.168.50.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/pause-893324/id_rsa Username:docker}
	I1029 09:21:07.556594  171268 ssh_runner.go:195] Run: systemctl --version
	I1029 09:21:07.582692  171268 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1029 09:21:07.742853  171268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1029 09:21:07.753090  171268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1029 09:21:07.753189  171268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1029 09:21:07.768349  171268 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1029 09:21:07.768383  171268 start.go:496] detecting cgroup driver to use...
	I1029 09:21:07.768510  171268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1029 09:21:07.796032  171268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1029 09:21:07.821941  171268 docker.go:218] disabling cri-docker service (if available) ...
	I1029 09:21:07.822034  171268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1029 09:21:07.852337  171268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1029 09:21:07.874631  171268 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1029 09:21:08.088981  171268 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1029 09:21:08.322156  171268 docker.go:234] disabling docker service ...
	I1029 09:21:08.322226  171268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1029 09:21:08.355456  171268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1029 09:21:08.374275  171268 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1029 09:21:08.609793  171268 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1029 09:21:08.804811  171268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1029 09:21:08.822777  171268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1029 09:21:08.848065  171268 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1029 09:21:08.848187  171268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:21:08.861844  171268 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1029 09:21:08.861919  171268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:21:08.876993  171268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:21:08.891079  171268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:21:08.903334  171268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1029 09:21:08.917997  171268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:21:08.933570  171268 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:21:08.948319  171268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1029 09:21:08.962499  171268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1029 09:21:08.977786  171268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1029 09:21:08.991974  171268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:21:09.192143  171268 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1029 09:21:09.431072  171268 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:21:09.431164  171268 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:21:09.443363  171268 start.go:564] Will wait 60s for crictl version
	I1029 09:21:09.443469  171268 ssh_runner.go:195] Run: which crictl
	I1029 09:21:09.451543  171268 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1029 09:21:09.634554  171268 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1029 09:21:09.634682  171268 ssh_runner.go:195] Run: crio --version
	I1029 09:21:09.785705  171268 ssh_runner.go:195] Run: crio --version
	I1029 09:21:09.877045  171268 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1029 09:21:09.881745  171268 main.go:143] libmachine: domain pause-893324 has defined MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:09.882281  171268 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:12:a9", ip: ""} in network mk-pause-893324: {Iface:virbr2 ExpiryTime:2025-10-29 10:19:55 +0000 UTC Type:0 Mac:52:54:00:1e:12:a9 Iaid: IPaddr:192.168.50.89 Prefix:24 Hostname:pause-893324 Clientid:01:52:54:00:1e:12:a9}
	I1029 09:21:09.882314  171268 main.go:143] libmachine: domain pause-893324 has defined IP address 192.168.50.89 and MAC address 52:54:00:1e:12:a9 in network mk-pause-893324
	I1029 09:21:09.882560  171268 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1029 09:21:09.896028  171268 kubeadm.go:884] updating cluster {Name:pause-893324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-893324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.89 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:21:09.896293  171268 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:21:09.896372  171268 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:21:10.038937  171268 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:21:10.038962  171268 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:21:10.039017  171268 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:21:10.138907  171268 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:21:10.138935  171268 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:21:10.138946  171268 kubeadm.go:935] updating node { 192.168.50.89 8443 v1.34.1 crio true true} ...
	I1029 09:21:10.139091  171268 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-893324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-893324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:21:10.139175  171268 ssh_runner.go:195] Run: crio config
	I1029 09:21:10.321806  171268 cni.go:84] Creating CNI manager for ""
	I1029 09:21:10.321837  171268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1029 09:21:10.321861  171268 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:21:10.321893  171268 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.89 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-893324 NodeName:pause-893324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:21:10.322092  171268 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-893324"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.89"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.89"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:21:10.322181  171268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:21:10.364918  171268 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:21:10.364995  171268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:21:10.404401  171268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1029 09:21:10.458387  171268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:21:10.523587  171268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1029 09:21:10.592736  171268 ssh_runner.go:195] Run: grep 192.168.50.89	control-plane.minikube.internal$ /etc/hosts
	I1029 09:21:10.603741  171268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:21:10.928607  171268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:21:10.966255  171268 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/pause-893324 for IP: 192.168.50.89
	I1029 09:21:10.966304  171268 certs.go:195] generating shared ca certs ...
	I1029 09:21:10.966330  171268 certs.go:227] acquiring lock for ca certs: {Name:mk7a2a9c7bc52f8ce34b75ca46a18294b750be87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:21:10.966542  171268 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-137232/.minikube/ca.key
	I1029 09:21:10.966605  171268 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.key
	I1029 09:21:10.966632  171268 certs.go:257] generating profile certs ...
	I1029 09:21:10.966759  171268 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/pause-893324/client.key
	I1029 09:21:10.966842  171268 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/pause-893324/apiserver.key.67dd0a5f
	I1029 09:21:10.966907  171268 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/pause-893324/proxy-client.key
	I1029 09:21:10.967073  171268 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/141231.pem (1338 bytes)
	W1029 09:21:10.967123  171268 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-137232/.minikube/certs/141231_empty.pem, impossibly tiny 0 bytes
	I1029 09:21:10.967482  171268 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:21:10.967566  171268 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:21:10.967607  171268 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:21:10.967637  171268 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/key.pem (1675 bytes)
	I1029 09:21:10.967704  171268 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/files/etc/ssl/certs/1412312.pem (1708 bytes)
	I1029 09:21:10.971715  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:21:11.086355  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1029 09:21:11.147481  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:21:11.201872  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:21:11.279890  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/pause-893324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1029 09:21:11.327146  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/pause-893324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1029 09:21:11.373373  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/pause-893324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:21:11.417987  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/pause-893324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:21:11.473637  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:21:11.534132  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/certs/141231.pem --> /usr/share/ca-certificates/141231.pem (1338 bytes)
	I1029 09:21:11.597359  171268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/files/etc/ssl/certs/1412312.pem --> /usr/share/ca-certificates/1412312.pem (1708 bytes)
	I1029 09:21:11.658514  171268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:21:11.695582  171268 ssh_runner.go:195] Run: openssl version
	I1029 09:21:11.709075  171268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:21:11.728739  171268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:21:11.739435  171268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:21:11.739525  171268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:21:11.762731  171268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:21:11.802113  171268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141231.pem && ln -fs /usr/share/ca-certificates/141231.pem /etc/ssl/certs/141231.pem"
	I1029 09:21:11.831395  171268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141231.pem
	I1029 09:21:11.838978  171268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:30 /usr/share/ca-certificates/141231.pem
	I1029 09:21:11.839052  171268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141231.pem
	I1029 09:21:11.854353  171268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141231.pem /etc/ssl/certs/51391683.0"
	I1029 09:21:11.877375  171268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1412312.pem && ln -fs /usr/share/ca-certificates/1412312.pem /etc/ssl/certs/1412312.pem"
	I1029 09:21:11.901515  171268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1412312.pem
	I1029 09:21:11.912836  171268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:30 /usr/share/ca-certificates/1412312.pem
	I1029 09:21:11.912938  171268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1412312.pem
	I1029 09:21:11.928482  171268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1412312.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:21:11.952478  171268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:21:11.963594  171268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:21:11.975531  171268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:21:11.987860  171268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:21:11.997880  171268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:21:12.005849  171268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:21:12.016489  171268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1029 09:21:12.025105  171268 kubeadm.go:401] StartCluster: {Name:pause-893324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-893324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.89 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:21:12.025273  171268 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:21:12.025369  171268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:21:12.084701  171268 cri.go:89] found id: "fcd2754e2b05f6e7010d6e581df54d3b15960c3fd5f83d643680c89d03910321"
	I1029 09:21:12.084722  171268 cri.go:89] found id: "ec8e0520a183227d620ba7f06e6fcbf656a495bd062f664b388192571a6ac865"
	I1029 09:21:12.084728  171268 cri.go:89] found id: "e396e3fde16e099c7d87988760f795ac52fc48097e29b7b15c95f311da21681a"
	I1029 09:21:12.084733  171268 cri.go:89] found id: "978058fbf1426a69d6043d86933e5e3e34440b82ba8161506c60e7c7270ab8cb"
	I1029 09:21:12.084738  171268 cri.go:89] found id: "a0dd75a034ccc17affda4545c12744727c0bc851b283773181a99feaafa121e3"
	I1029 09:21:12.084743  171268 cri.go:89] found id: "356b9115644ec8e980fd5232f9fd2fae910e8634d4dbbe358a4c53e4c81c9e6e"
	I1029 09:21:12.084747  171268 cri.go:89] found id: "9e9f60c62409e9e48ce8dd931024f3cc4162f35867dfee520c2b3d4edb46afd9"
	I1029 09:21:12.084751  171268 cri.go:89] found id: "46323d524b00d4303550365430bf5ce01e10bf57c1b6689d8a0bbdc77246b12f"
	I1029 09:21:12.084755  171268 cri.go:89] found id: "8b4c0afcb723c088cf7c5e84b22d8e0fbe1f699162c6d3e02843c76a87ca7a21"
	I1029 09:21:12.084765  171268 cri.go:89] found id: "03a777e5c25345a53a3550ef35c59e2135efdd49afc9fad0044b752ba9a69177"
	I1029 09:21:12.084783  171268 cri.go:89] found id: "eb598db2c93291e6869c7c427a429ff6b728d2ec68fcab6074303d1667ebe873"
	I1029 09:21:12.084787  171268 cri.go:89] found id: "814dec54d394fbb51cc7577856dd744516ea6b3d27c586ca3d454f20e2caf3ad"
	I1029 09:21:12.084791  171268 cri.go:89] found id: ""
	I1029 09:21:12.084841  171268 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-893324 -n pause-893324
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-893324 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-893324 logs -n 25: (3.047458199s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-flag-964043                                                                                                                                                                                            │ force-systemd-flag-964043 │ jenkins │ v1.37.0 │ 29 Oct 25 09:18 UTC │ 29 Oct 25 09:18 UTC │
	│ start   │ -p cert-options-611904 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-611904       │ jenkins │ v1.37.0 │ 29 Oct 25 09:18 UTC │ 29 Oct 25 09:19 UTC │
	│ ssh     │ -p NoKubernetes-598598 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-598598       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-642154                                                                                                                                                                                            │ kubernetes-upgrade-642154 │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:19 UTC │
	│ stop    │ -p NoKubernetes-598598                                                                                                                                                                                                  │ NoKubernetes-598598       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:19 UTC │
	│ start   │ -p NoKubernetes-598598 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-598598       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:19 UTC │
	│ start   │ -p running-upgrade-882934 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                                                                                      │ running-upgrade-882934    │ jenkins │ v1.32.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:20 UTC │
	│ ssh     │ -p NoKubernetes-598598 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-598598       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │                     │
	│ delete  │ -p NoKubernetes-598598                                                                                                                                                                                                  │ NoKubernetes-598598       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:19 UTC │
	│ ssh     │ cert-options-611904 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-611904       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:19 UTC │
	│ ssh     │ -p cert-options-611904 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-611904       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:19 UTC │
	│ delete  │ -p cert-options-611904                                                                                                                                                                                                  │ cert-options-611904       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:19 UTC │
	│ start   │ -p pause-893324 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-893324              │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:21 UTC │
	│ start   │ -p stopped-upgrade-317680 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-317680    │ jenkins │ v1.32.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:21 UTC │
	│ start   │ -p running-upgrade-882934 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ running-upgrade-882934    │ jenkins │ v1.37.0 │ 29 Oct 25 09:20 UTC │ 29 Oct 25 09:21 UTC │
	│ start   │ -p pause-893324 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-893324              │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │ 29 Oct 25 09:21 UTC │
	│ stop    │ stopped-upgrade-317680 stop                                                                                                                                                                                             │ stopped-upgrade-317680    │ jenkins │ v1.32.0 │ 29 Oct 25 09:21 UTC │ 29 Oct 25 09:21 UTC │
	│ start   │ -p stopped-upgrade-317680 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ stopped-upgrade-317680    │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │ 29 Oct 25 09:21 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-882934 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ running-upgrade-882934    │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │                     │
	│ delete  │ -p running-upgrade-882934                                                                                                                                                                                               │ running-upgrade-882934    │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │ 29 Oct 25 09:21 UTC │
	│ start   │ -p cert-expiration-042301 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                                                                 │ cert-expiration-042301    │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │                     │
	│ start   │ -p auto-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-588311               │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-317680 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-317680    │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │                     │
	│ delete  │ -p stopped-upgrade-317680                                                                                                                                                                                               │ stopped-upgrade-317680    │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │ 29 Oct 25 09:21 UTC │
	│ start   │ -p kindnet-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                                                                                  │ kindnet-588311            │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:21:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:21:46.192895  171947 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:21:46.193084  171947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:21:46.193101  171947 out.go:374] Setting ErrFile to fd 2...
	I1029 09:21:46.193109  171947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:21:46.193396  171947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
	I1029 09:21:46.194042  171947 out.go:368] Setting JSON to false
	I1029 09:21:46.195011  171947 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7435,"bootTime":1761722271,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:21:46.195106  171947 start.go:143] virtualization: kvm guest
	I1029 09:21:46.196718  171947 out.go:179] * [kindnet-588311] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:21:46.198373  171947 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:21:46.198398  171947 notify.go:221] Checking for updates...
	I1029 09:21:46.200670  171947 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:21:46.202056  171947 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	I1029 09:21:46.203853  171947 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	I1029 09:21:46.205008  171947 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:21:46.206014  171947 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:21:46.207697  171947 config.go:182] Loaded profile config "auto-588311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:21:46.207864  171947 config.go:182] Loaded profile config "cert-expiration-042301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:21:46.208004  171947 config.go:182] Loaded profile config "guest-549168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1029 09:21:46.208234  171947 config.go:182] Loaded profile config "pause-893324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:21:46.208362  171947 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:21:46.255940  171947 out.go:179] * Using the kvm2 driver based on user configuration
	I1029 09:21:46.256972  171947 start.go:309] selected driver: kvm2
	I1029 09:21:46.257005  171947 start.go:930] validating driver "kvm2" against <nil>
	I1029 09:21:46.257026  171947 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:21:46.258247  171947 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 09:21:46.258722  171947 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:21:46.258777  171947 cni.go:84] Creating CNI manager for "kindnet"
	I1029 09:21:46.258790  171947 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1029 09:21:46.258844  171947 start.go:353] cluster config:
	{Name:kindnet-588311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-588311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:21:46.258996  171947 iso.go:125] acquiring lock: {Name:mk91f2a3d67828aaa5b9f798c71cdbe9317767a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:21:46.260293  171947 out.go:179] * Starting "kindnet-588311" primary control-plane node in "kindnet-588311" cluster
	I1029 09:21:45.992226  171268 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1029 09:21:46.009278  171268 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1029 09:21:46.038542  171268 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:21:46.045577  171268 system_pods.go:59] 6 kube-system pods found
	I1029 09:21:46.045628  171268 system_pods.go:61] "coredns-66bc5c9577-mbnml" [55421df5-01e8-405b-9e7e-3043102dd93a] Running
	I1029 09:21:46.045645  171268 system_pods.go:61] "etcd-pause-893324" [ece4a623-4172-48ec-9200-8358c928f5ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:21:46.045657  171268 system_pods.go:61] "kube-apiserver-pause-893324" [9e0d59e7-e4e5-46b0-bf36-1a867f51fc6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:21:46.045671  171268 system_pods.go:61] "kube-controller-manager-pause-893324" [ceac297d-7418-451c-8530-118345523cfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:21:46.045678  171268 system_pods.go:61] "kube-proxy-dpg4s" [5f8132f7-9f54-4f52-955c-530a0a9bac9f] Running
	I1029 09:21:46.045687  171268 system_pods.go:61] "kube-scheduler-pause-893324" [c6e30fc9-ff3d-4721-82fa-27e9b31d2ccd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:21:46.045698  171268 system_pods.go:74] duration metric: took 7.120377ms to wait for pod list to return data ...
	I1029 09:21:46.045718  171268 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:21:46.049214  171268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1029 09:21:46.049256  171268 node_conditions.go:123] node cpu capacity is 2
	I1029 09:21:46.049271  171268 node_conditions.go:105] duration metric: took 3.545914ms to run NodePressure ...
	I1029 09:21:46.049331  171268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1029 09:21:46.309944  171268 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1029 09:21:46.313451  171268 kubeadm.go:744] kubelet initialised
	I1029 09:21:46.313479  171268 kubeadm.go:745] duration metric: took 3.49538ms waiting for restarted kubelet to initialise ...
	I1029 09:21:46.313510  171268 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1029 09:21:46.329012  171268 ops.go:34] apiserver oom_adj: -16
	I1029 09:21:46.329044  171268 kubeadm.go:602] duration metric: took 34.144827306s to restartPrimaryControlPlane
	I1029 09:21:46.329059  171268 kubeadm.go:403] duration metric: took 34.303972448s to StartCluster
	I1029 09:21:46.329087  171268 settings.go:142] acquiring lock: {Name:mkf57999febc1e58dfdf035d9c465d8b8e2fde1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:21:46.329194  171268 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21800-137232/kubeconfig
	I1029 09:21:46.330646  171268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/kubeconfig: {Name:mk5d77803dd54d458a7a9c3d32d70e7b02c64781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:21:46.330996  171268 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.89 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:21:46.331053  171268 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1029 09:21:46.331269  171268 config.go:182] Loaded profile config "pause-893324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:21:46.332515  171268 out.go:179] * Verifying Kubernetes components...
	I1029 09:21:46.332516  171268 out.go:179] * Enabled addons: 
	I1029 09:21:48.251759  171589 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.385607528s)
	I1029 09:21:48.251782  171589 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1029 09:21:48.251835  171589 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1029 09:21:48.257117  171589 start.go:564] Will wait 60s for crictl version
	I1029 09:21:48.257174  171589 ssh_runner.go:195] Run: which crictl
	I1029 09:21:48.261089  171589 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1029 09:21:48.293795  171589 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1029 09:21:48.293888  171589 ssh_runner.go:195] Run: crio --version
	I1029 09:21:48.322369  171589 ssh_runner.go:195] Run: crio --version
	I1029 09:21:48.350784  171589 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1029 09:21:48.354994  171589 main.go:143] libmachine: domain cert-expiration-042301 has defined MAC address 52:54:00:bf:95:31 in network mk-cert-expiration-042301
	I1029 09:21:48.355400  171589 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bf:95:31", ip: ""} in network mk-cert-expiration-042301: {Iface:virbr1 ExpiryTime:2025-10-29 10:17:54 +0000 UTC Type:0 Mac:52:54:00:bf:95:31 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:cert-expiration-042301 Clientid:01:52:54:00:bf:95:31}
	I1029 09:21:48.355439  171589 main.go:143] libmachine: domain cert-expiration-042301 has defined IP address 192.168.39.175 and MAC address 52:54:00:bf:95:31 in network mk-cert-expiration-042301
	I1029 09:21:48.355620  171589 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1029 09:21:48.360117  171589 kubeadm.go:884] updating cluster {Name:cert-expiration-042301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-042301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1029 09:21:48.360200  171589 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:21:48.360252  171589 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:21:48.406714  171589 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:21:48.406726  171589 crio.go:433] Images already preloaded, skipping extraction
	I1029 09:21:48.406777  171589 ssh_runner.go:195] Run: sudo crictl images --output json
	I1029 09:21:48.440137  171589 crio.go:514] all images are preloaded for cri-o runtime.
	I1029 09:21:48.440152  171589 cache_images.go:86] Images are preloaded, skipping loading
	I1029 09:21:48.440161  171589 kubeadm.go:935] updating node { 192.168.39.175 8443 v1.34.1 crio true true} ...
	I1029 09:21:48.440376  171589 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-042301 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-042301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1029 09:21:48.440462  171589 ssh_runner.go:195] Run: crio config
	I1029 09:21:48.486232  171589 cni.go:84] Creating CNI manager for ""
	I1029 09:21:48.486256  171589 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1029 09:21:48.486277  171589 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1029 09:21:48.486304  171589 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.175 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-042301 NodeName:cert-expiration-042301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1029 09:21:48.486472  171589 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-042301"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.175"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.175"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1029 09:21:48.486558  171589 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1029 09:21:48.498874  171589 binaries.go:44] Found k8s binaries, skipping transfer
	I1029 09:21:48.498924  171589 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1029 09:21:48.511613  171589 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1029 09:21:48.531818  171589 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1029 09:21:48.550629  171589 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1029 09:21:48.569642  171589 ssh_runner.go:195] Run: grep 192.168.39.175	control-plane.minikube.internal$ /etc/hosts
	I1029 09:21:48.574856  171589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:21:48.781143  171589 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:21:48.798989  171589 certs.go:69] Setting up /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301 for IP: 192.168.39.175
	I1029 09:21:48.799006  171589 certs.go:195] generating shared ca certs ...
	I1029 09:21:48.799030  171589 certs.go:227] acquiring lock for ca certs: {Name:mk7a2a9c7bc52f8ce34b75ca46a18294b750be87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:21:48.799253  171589 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21800-137232/.minikube/ca.key
	I1029 09:21:48.799320  171589 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.key
	I1029 09:21:48.799331  171589 certs.go:257] generating profile certs ...
	W1029 09:21:48.799555  171589 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1029 09:21:48.799583  171589 certs.go:624] cert expired /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/client.crt: expiration: 2025-10-29 09:21:02 +0000 UTC, now: 2025-10-29 09:21:48.79957579 +0000 UTC m=+29.345373160
	I1029 09:21:48.799757  171589 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/client.key
	I1029 09:21:48.799783  171589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/client.crt with IP's: []
	I1029 09:21:49.049087  171589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/client.crt ...
	I1029 09:21:49.049105  171589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/client.crt: {Name:mkd3621c4d39a9fff6c599f295a174afa005bdae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:21:49.049246  171589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/client.key ...
	I1029 09:21:49.049254  171589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/client.key: {Name:mk09eec7e8b02cf9fa84866add435fc40ac88f43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1029 09:21:49.049421  171589 out.go:285] ! Certificate apiserver.crt.bf3f6da2 has expired. Generating a new one...
	I1029 09:21:49.049439  171589 certs.go:624] cert expired /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/apiserver.crt.bf3f6da2: expiration: 2025-10-29 09:21:02 +0000 UTC, now: 2025-10-29 09:21:49.049433904 +0000 UTC m=+29.595231274
	I1029 09:21:49.049512  171589 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/apiserver.key.bf3f6da2
	I1029 09:21:49.049527  171589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/apiserver.crt.bf3f6da2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.175]
	I1029 09:21:49.297928  171589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/apiserver.crt.bf3f6da2 ...
	I1029 09:21:49.297945  171589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/apiserver.crt.bf3f6da2: {Name:mk750365490bf018026b8e3146657134192e25ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:21:49.298101  171589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/apiserver.key.bf3f6da2 ...
	I1029 09:21:49.298111  171589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/apiserver.key.bf3f6da2: {Name:mk8dedc19f12b18fd88f775b0bedc74203687f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:21:49.298192  171589 certs.go:382] copying /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/apiserver.crt.bf3f6da2 -> /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/apiserver.crt
	I1029 09:21:49.298370  171589 certs.go:386] copying /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/apiserver.key.bf3f6da2 -> /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/apiserver.key
	W1029 09:21:49.298573  171589 out.go:285] ! Certificate proxy-client.crt has expired. Generating a new one...
	I1029 09:21:49.298593  171589 certs.go:624] cert expired /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/proxy-client.crt: expiration: 2025-10-29 09:21:02 +0000 UTC, now: 2025-10-29 09:21:49.298587491 +0000 UTC m=+29.844384860
	I1029 09:21:49.298665  171589 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/proxy-client.key
	I1029 09:21:49.298682  171589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/proxy-client.crt with IP's: []
	I1029 09:21:49.387254  171589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/proxy-client.crt ...
	I1029 09:21:49.387270  171589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/proxy-client.crt: {Name:mk5b5f0293638c591b17bf6b3c50fd7ab785c6e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:21:49.387411  171589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/proxy-client.key ...
	I1029 09:21:49.387419  171589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/proxy-client.key: {Name:mk03f417bf10dd503f3ea72df7fa334a7909579c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:21:49.387576  171589 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/141231.pem (1338 bytes)
	W1029 09:21:49.387609  171589 certs.go:480] ignoring /home/jenkins/minikube-integration/21800-137232/.minikube/certs/141231_empty.pem, impossibly tiny 0 bytes
	I1029 09:21:49.387617  171589 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca-key.pem (1679 bytes)
	I1029 09:21:49.387635  171589 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/ca.pem (1082 bytes)
	I1029 09:21:49.387653  171589 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/cert.pem (1123 bytes)
	I1029 09:21:49.387670  171589 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/certs/key.pem (1675 bytes)
	I1029 09:21:49.387702  171589 certs.go:484] found cert: /home/jenkins/minikube-integration/21800-137232/.minikube/files/etc/ssl/certs/1412312.pem (1708 bytes)
	I1029 09:21:49.388216  171589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1029 09:21:49.417936  171589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1029 09:21:49.444899  171589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1029 09:21:49.471722  171589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1029 09:21:49.497793  171589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1029 09:21:45.999778  171624 main.go:143] libmachine: domain auto-588311 has defined MAC address 52:54:00:be:24:94 in network mk-auto-588311
	I1029 09:21:46.000668  171624 main.go:143] libmachine: no network interface addresses found for domain auto-588311 (source=lease)
	I1029 09:21:46.000688  171624 main.go:143] libmachine: trying to list again with source=arp
	I1029 09:21:46.001281  171624 main.go:143] libmachine: unable to find current IP address of domain auto-588311 in network mk-auto-588311 (interfaces detected: [])
	I1029 09:21:46.001323  171624 retry.go:31] will retry after 2.834650373s: waiting for domain to come up
	I1029 09:21:48.837287  171624 main.go:143] libmachine: domain auto-588311 has defined MAC address 52:54:00:be:24:94 in network mk-auto-588311
	I1029 09:21:48.838032  171624 main.go:143] libmachine: no network interface addresses found for domain auto-588311 (source=lease)
	I1029 09:21:48.838049  171624 main.go:143] libmachine: trying to list again with source=arp
	I1029 09:21:48.838392  171624 main.go:143] libmachine: unable to find current IP address of domain auto-588311 in network mk-auto-588311 (interfaces detected: [])
	I1029 09:21:48.838451  171624 retry.go:31] will retry after 3.534943822s: waiting for domain to come up
	I1029 09:21:46.333792  171268 addons.go:515] duration metric: took 2.757517ms for enable addons: enabled=[]
	I1029 09:21:46.333849  171268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1029 09:21:46.539668  171268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1029 09:21:46.560292  171268 node_ready.go:35] waiting up to 6m0s for node "pause-893324" to be "Ready" ...
	I1029 09:21:46.563871  171268 node_ready.go:49] node "pause-893324" is "Ready"
	I1029 09:21:46.563917  171268 node_ready.go:38] duration metric: took 3.575722ms for node "pause-893324" to be "Ready" ...
	I1029 09:21:46.563934  171268 api_server.go:52] waiting for apiserver process to appear ...
	I1029 09:21:46.563987  171268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:21:46.592945  171268 api_server.go:72] duration metric: took 261.897605ms to wait for apiserver process to appear ...
	I1029 09:21:46.592978  171268 api_server.go:88] waiting for apiserver healthz status ...
	I1029 09:21:46.593001  171268 api_server.go:253] Checking apiserver healthz at https://192.168.50.89:8443/healthz ...
	I1029 09:21:46.599163  171268 api_server.go:279] https://192.168.50.89:8443/healthz returned 200:
	ok
	I1029 09:21:46.600576  171268 api_server.go:141] control plane version: v1.34.1
	I1029 09:21:46.600604  171268 api_server.go:131] duration metric: took 7.61901ms to wait for apiserver health ...
	I1029 09:21:46.600618  171268 system_pods.go:43] waiting for kube-system pods to appear ...
	I1029 09:21:46.604724  171268 system_pods.go:59] 6 kube-system pods found
	I1029 09:21:46.604754  171268 system_pods.go:61] "coredns-66bc5c9577-mbnml" [55421df5-01e8-405b-9e7e-3043102dd93a] Running
	I1029 09:21:46.604764  171268 system_pods.go:61] "etcd-pause-893324" [ece4a623-4172-48ec-9200-8358c928f5ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:21:46.604770  171268 system_pods.go:61] "kube-apiserver-pause-893324" [9e0d59e7-e4e5-46b0-bf36-1a867f51fc6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:21:46.604779  171268 system_pods.go:61] "kube-controller-manager-pause-893324" [ceac297d-7418-451c-8530-118345523cfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:21:46.604783  171268 system_pods.go:61] "kube-proxy-dpg4s" [5f8132f7-9f54-4f52-955c-530a0a9bac9f] Running
	I1029 09:21:46.604790  171268 system_pods.go:61] "kube-scheduler-pause-893324" [c6e30fc9-ff3d-4721-82fa-27e9b31d2ccd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:21:46.604798  171268 system_pods.go:74] duration metric: took 4.174108ms to wait for pod list to return data ...
	I1029 09:21:46.604808  171268 default_sa.go:34] waiting for default service account to be created ...
	I1029 09:21:46.607800  171268 default_sa.go:45] found service account: "default"
	I1029 09:21:46.607821  171268 default_sa.go:55] duration metric: took 3.003845ms for default service account to be created ...
	I1029 09:21:46.607833  171268 system_pods.go:116] waiting for k8s-apps to be running ...
	I1029 09:21:46.611935  171268 system_pods.go:86] 6 kube-system pods found
	I1029 09:21:46.611965  171268 system_pods.go:89] "coredns-66bc5c9577-mbnml" [55421df5-01e8-405b-9e7e-3043102dd93a] Running
	I1029 09:21:46.611979  171268 system_pods.go:89] "etcd-pause-893324" [ece4a623-4172-48ec-9200-8358c928f5ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1029 09:21:46.611989  171268 system_pods.go:89] "kube-apiserver-pause-893324" [9e0d59e7-e4e5-46b0-bf36-1a867f51fc6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1029 09:21:46.612004  171268 system_pods.go:89] "kube-controller-manager-pause-893324" [ceac297d-7418-451c-8530-118345523cfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1029 09:21:46.612011  171268 system_pods.go:89] "kube-proxy-dpg4s" [5f8132f7-9f54-4f52-955c-530a0a9bac9f] Running
	I1029 09:21:46.612024  171268 system_pods.go:89] "kube-scheduler-pause-893324" [c6e30fc9-ff3d-4721-82fa-27e9b31d2ccd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1029 09:21:46.612035  171268 system_pods.go:126] duration metric: took 4.19522ms to wait for k8s-apps to be running ...
	I1029 09:21:46.612050  171268 system_svc.go:44] waiting for kubelet service to be running ....
	I1029 09:21:46.612113  171268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:21:46.629753  171268 system_svc.go:56] duration metric: took 17.690879ms WaitForService to wait for kubelet
	I1029 09:21:46.629792  171268 kubeadm.go:587] duration metric: took 298.751594ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:21:46.629821  171268 node_conditions.go:102] verifying NodePressure condition ...
	I1029 09:21:46.632631  171268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1029 09:21:46.632661  171268 node_conditions.go:123] node cpu capacity is 2
	I1029 09:21:46.632680  171268 node_conditions.go:105] duration metric: took 2.850134ms to run NodePressure ...
	I1029 09:21:46.632697  171268 start.go:242] waiting for startup goroutines ...
	I1029 09:21:46.632708  171268 start.go:247] waiting for cluster config update ...
	I1029 09:21:46.632725  171268 start.go:256] writing updated cluster config ...
	I1029 09:21:46.633061  171268 ssh_runner.go:195] Run: rm -f paused
	I1029 09:21:46.640739  171268 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1029 09:21:46.641555  171268 kapi.go:59] client config for pause-893324: &rest.Config{Host:"https://192.168.50.89:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21800-137232/.minikube/profiles/pause-893324/client.crt", KeyFile:"/home/jenkins/minikube-integration/21800-137232/.minikube/profiles/pause-893324/client.key", CAFile:"/home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1029 09:21:46.644996  171268 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mbnml" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:21:46.650750  171268 pod_ready.go:94] pod "coredns-66bc5c9577-mbnml" is "Ready"
	I1029 09:21:46.650784  171268 pod_ready.go:86] duration metric: took 5.763987ms for pod "coredns-66bc5c9577-mbnml" in "kube-system" namespace to be "Ready" or be gone ...
	I1029 09:21:46.653854  171268 pod_ready.go:83] waiting for pod "etcd-pause-893324" in "kube-system" namespace to be "Ready" or be gone ...
	W1029 09:21:48.660288  171268 pod_ready.go:104] pod "etcd-pause-893324" is not "Ready", error: <nil>
	I1029 09:21:46.261208  171947 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 09:21:46.261272  171947 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-137232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 09:21:46.261287  171947 cache.go:59] Caching tarball of preloaded images
	I1029 09:21:46.261392  171947 preload.go:233] Found /home/jenkins/minikube-integration/21800-137232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1029 09:21:46.261422  171947 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1029 09:21:46.261559  171947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/config.json ...
	I1029 09:21:46.261591  171947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/config.json: {Name:mk831ee53c47f0e002ee863cb4b96a2274f1032c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1029 09:21:46.261787  171947 start.go:360] acquireMachinesLock for kindnet-588311: {Name:mkcf4e1d7f2bf8251db3d5b4273e9a32697d7a63 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1029 09:21:53.849024  171947 start.go:364] duration metric: took 7.587204695s to acquireMachinesLock for "kindnet-588311"
	I1029 09:21:53.849251  171947 start.go:93] Provisioning new machine with config: &{Name:kindnet-588311 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-588311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1029 09:21:53.849381  171947 start.go:125] createHost starting for "" (driver="kvm2")
	I1029 09:21:49.524511  171589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1029 09:21:49.551756  171589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1029 09:21:49.579203  171589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/cert-expiration-042301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1029 09:21:49.605465  171589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1029 09:21:49.632311  171589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/certs/141231.pem --> /usr/share/ca-certificates/141231.pem (1338 bytes)
	I1029 09:21:49.658699  171589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21800-137232/.minikube/files/etc/ssl/certs/1412312.pem --> /usr/share/ca-certificates/1412312.pem (1708 bytes)
	I1029 09:21:49.686332  171589 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1029 09:21:49.705887  171589 ssh_runner.go:195] Run: openssl version
	I1029 09:21:49.711967  171589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1029 09:21:49.726337  171589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:21:49.731561  171589 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 29 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:21:49.731601  171589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1029 09:21:49.738955  171589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1029 09:21:49.749685  171589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141231.pem && ln -fs /usr/share/ca-certificates/141231.pem /etc/ssl/certs/141231.pem"
	I1029 09:21:49.762054  171589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141231.pem
	I1029 09:21:49.767443  171589 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 29 08:30 /usr/share/ca-certificates/141231.pem
	I1029 09:21:49.767479  171589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141231.pem
	I1029 09:21:49.774444  171589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141231.pem /etc/ssl/certs/51391683.0"
	I1029 09:21:49.787127  171589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1412312.pem && ln -fs /usr/share/ca-certificates/1412312.pem /etc/ssl/certs/1412312.pem"
	I1029 09:21:49.799159  171589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1412312.pem
	I1029 09:21:49.804622  171589 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 29 08:30 /usr/share/ca-certificates/1412312.pem
	I1029 09:21:49.804653  171589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1412312.pem
	I1029 09:21:49.811680  171589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1412312.pem /etc/ssl/certs/3ec20f2e.0"
	I1029 09:21:49.822997  171589 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1029 09:21:49.828441  171589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1029 09:21:49.835749  171589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1029 09:21:49.842662  171589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1029 09:21:49.849779  171589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1029 09:21:49.856520  171589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1029 09:21:49.863249  171589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1029 09:21:49.870083  171589 kubeadm.go:401] StartCluster: {Name:cert-expiration-042301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.34.1 ClusterName:cert-expiration-042301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 09:21:49.870147  171589 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1029 09:21:49.870213  171589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1029 09:21:49.907306  171589 cri.go:89] found id: "d790c1b305ba8b7aa5fc115a49007ce49c5c9540660c891f7df252f01b90c8d7"
	I1029 09:21:49.907318  171589 cri.go:89] found id: "252ff8d83c911d21268cbeec6f2ca969c1d90d0e9ccb93b6678aff1831235a4d"
	I1029 09:21:49.907321  171589 cri.go:89] found id: "ce6fabc9128ad474f3b95ac4b47f1ab09aa7f3aedc1cc209f88bb6832a1804e5"
	I1029 09:21:49.907323  171589 cri.go:89] found id: "e4434c2c229d4f1d5438635b0bf6cf8fc9e7e436e5d73bc01dbfdd507fb6063a"
	I1029 09:21:49.907324  171589 cri.go:89] found id: "dad4f168aefd9d46d2d45cdf209847bc36dfe9d92171dcf868f93e68316f8a8f"
	I1029 09:21:49.907326  171589 cri.go:89] found id: "2063cb7287dbeeb0b9df4368d45defa040347e9719ffc96a91b0c4dc4d50836d"
	I1029 09:21:49.907328  171589 cri.go:89] found id: "c08f9efe64ce4286949a53a2d3459d1673f61b01b4ec24c3b43f3f18a9bc0aae"
	I1029 09:21:49.907329  171589 cri.go:89] found id: "037eebfda3bf84b6da44b4730265f67da4f92f076f5590b6bbb0e6f45fe43c39"
	I1029 09:21:49.907330  171589 cri.go:89] found id: "42220699a66fd6efd5ee27b77d346c4b9c916cc7375f445410b75bf4ce18eb53"
	I1029 09:21:49.907336  171589 cri.go:89] found id: "d3d919d1eb78bb985f5a1f4e71bd5ac87f954378ac34c20a0815255eafbcadf7"
	I1029 09:21:49.907338  171589 cri.go:89] found id: "9d06fbcaff8b27ee7d02691e3bda04d8998dff114bdc031b12d9a86c12c3f916"
	I1029 09:21:49.907340  171589 cri.go:89] found id: "311fd4bea9bf4b619fab9050dbfecf6d735853d1ac868bb6053193db0ac7cc78"
	I1029 09:21:49.907342  171589 cri.go:89] found id: "0022d5927a542e53fda6889386ad0a6f9fa2e5d59c59d082dba75a00c267d493"
	I1029 09:21:49.907343  171589 cri.go:89] found id: ""
	I1029 09:21:49.907383  171589 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-893324 -n pause-893324
helpers_test.go:269: (dbg) Run:  kubectl --context pause-893324 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-893324 -n pause-893324
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-893324 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-893324 logs -n 25: (2.38313963s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p NoKubernetes-598598 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-598598       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-642154                                                                                                                                │ kubernetes-upgrade-642154 │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:19 UTC │
	│ stop    │ -p NoKubernetes-598598                                                                                                                                      │ NoKubernetes-598598       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:19 UTC │
	│ start   │ -p NoKubernetes-598598 --driver=kvm2  --container-runtime=crio                                                                                              │ NoKubernetes-598598       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:19 UTC │
	│ start   │ -p running-upgrade-882934 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ running-upgrade-882934    │ jenkins │ v1.32.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:20 UTC │
	│ ssh     │ -p NoKubernetes-598598 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-598598       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │                     │
	│ delete  │ -p NoKubernetes-598598                                                                                                                                      │ NoKubernetes-598598       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:19 UTC │
	│ ssh     │ cert-options-611904 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                 │ cert-options-611904       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:19 UTC │
	│ ssh     │ -p cert-options-611904 -- sudo cat /etc/kubernetes/admin.conf                                                                                               │ cert-options-611904       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:19 UTC │
	│ delete  │ -p cert-options-611904                                                                                                                                      │ cert-options-611904       │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:19 UTC │
	│ start   │ -p pause-893324 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-893324              │ jenkins │ v1.37.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:21 UTC │
	│ start   │ -p stopped-upgrade-317680 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ stopped-upgrade-317680    │ jenkins │ v1.32.0 │ 29 Oct 25 09:19 UTC │ 29 Oct 25 09:21 UTC │
	│ start   │ -p running-upgrade-882934 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ running-upgrade-882934    │ jenkins │ v1.37.0 │ 29 Oct 25 09:20 UTC │ 29 Oct 25 09:21 UTC │
	│ start   │ -p pause-893324 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-893324              │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │ 29 Oct 25 09:21 UTC │
	│ stop    │ stopped-upgrade-317680 stop                                                                                                                                 │ stopped-upgrade-317680    │ jenkins │ v1.32.0 │ 29 Oct 25 09:21 UTC │ 29 Oct 25 09:21 UTC │
	│ start   │ -p stopped-upgrade-317680 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-317680    │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │ 29 Oct 25 09:21 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-882934 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ running-upgrade-882934    │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │                     │
	│ delete  │ -p running-upgrade-882934                                                                                                                                   │ running-upgrade-882934    │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │ 29 Oct 25 09:21 UTC │
	│ start   │ -p cert-expiration-042301 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                     │ cert-expiration-042301    │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │ 29 Oct 25 09:22 UTC │
	│ start   │ -p auto-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                       │ auto-588311               │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-317680 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-317680    │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │                     │
	│ delete  │ -p stopped-upgrade-317680                                                                                                                                   │ stopped-upgrade-317680    │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │ 29 Oct 25 09:21 UTC │
	│ start   │ -p kindnet-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-588311            │ jenkins │ v1.37.0 │ 29 Oct 25 09:21 UTC │                     │
	│ delete  │ -p cert-expiration-042301                                                                                                                                   │ cert-expiration-042301    │ jenkins │ v1.37.0 │ 29 Oct 25 09:22 UTC │ 29 Oct 25 09:22 UTC │
	│ start   │ -p calico-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio                        │ calico-588311             │ jenkins │ v1.37.0 │ 29 Oct 25 09:22 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 09:22:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 09:22:03.072297  172279 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:22:03.072481  172279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:22:03.072496  172279 out.go:374] Setting ErrFile to fd 2...
	I1029 09:22:03.072501  172279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:22:03.072827  172279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
	I1029 09:22:03.073599  172279 out.go:368] Setting JSON to false
	I1029 09:22:03.074869  172279 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7452,"bootTime":1761722271,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:22:03.074999  172279 start.go:143] virtualization: kvm guest
	I1029 09:22:03.076698  172279 out.go:179] * [calico-588311] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:22:03.077891  172279 notify.go:221] Checking for updates...
	I1029 09:22:03.077905  172279 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:22:03.079187  172279 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:22:03.080247  172279 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	I1029 09:22:03.081427  172279 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	I1029 09:22:03.082545  172279 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:22:03.083933  172279 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:22:03.085497  172279 config.go:182] Loaded profile config "auto-588311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:22:03.085612  172279 config.go:182] Loaded profile config "guest-549168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1029 09:22:03.085696  172279 config.go:182] Loaded profile config "kindnet-588311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:22:03.085793  172279 config.go:182] Loaded profile config "pause-893324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:22:03.085901  172279 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:22:03.127940  172279 out.go:179] * Using the kvm2 driver based on user configuration
	I1029 09:22:03.128932  172279 start.go:309] selected driver: kvm2
	I1029 09:22:03.128949  172279 start.go:930] validating driver "kvm2" against <nil>
	I1029 09:22:03.128964  172279 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:22:03.130046  172279 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 09:22:03.130397  172279 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1029 09:22:03.130464  172279 cni.go:84] Creating CNI manager for "calico"
	I1029 09:22:03.130478  172279 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1029 09:22:03.130536  172279 start.go:353] cluster config:
	{Name:calico-588311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-588311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1029 09:22:03.130663  172279 iso.go:125] acquiring lock: {Name:mk91f2a3d67828aaa5b9f798c71cdbe9317767a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 09:22:03.131934  172279 out.go:179] * Starting "calico-588311" primary control-plane node in "calico-588311" cluster
	
	
	==> CRI-O <==
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.229579517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761729724229559635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29cb2164-d045-4f12-8045-ccd09b6c1501 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.230180852Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e19c266-2e5b-4899-b4ad-e42a38d30429 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.230254708Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e19c266-2e5b-4899-b4ad-e42a38d30429 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.230520578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a7c24ada82354252cb66a59fee6906bea1753d5ccda6ac63d1ad202ca3ba6a0,PodSandboxId:8999e69316f7a7633b1e1c160264cc860eb9d99bb0db8f0635abf7dba3f05339,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761729701404973668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cffe6a3081c1b78af5ea0d943026dd,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a02d15b16ad3c67a3633ac6435beb5ac8c22cf04abd6013596672a275dd612fb,PodSandboxId:ea2ba9c2c531e3cc2962958848da4fa34ae7b66903602bb093b23627926f2975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761729701411871970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0982e617a8fee3e09b26bd173358c235,},Annotations:map[string]string{io
.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22c10d8c5a7c3ead98af69a3d786354abb8cc82b5f1423a8a935875cb39bad2,PodSandboxId:e1bb2f22e314cfaf737e8e8cec7e11b39452f0f4072c6a2eab483e70a145a217,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761729701380317870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893324,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dab438294c3ac2591ee4d5e4a4a1192,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d20b1c7c467c25702547b329e476cf01e6d70b701649662b039db08d0b32f6,PodSandboxId:ffeec05e47c6b08459815516011445be572be3168427237065383eb1f66d3959,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761729701370019813,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04b0b35390cedc3ee1faf204552e81c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:100cbc4316bc7d501b6b10f4a8006b0ef5aa5632a8126bc2413b7eb1f321cf64,PodSandboxId:e83f0a8c8fff8d09155a6a7a94b55e6fb1538a91049d60c42d89013787100ef5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Sta
te:CONTAINER_RUNNING,CreatedAt:1761729693164137072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpg4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8132f7-9f54-4f52-955c-530a0a9bac9f,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:424dd140b4a3e95d08671486ecc075a0dcd83954d6f4de1fad7aad1dd5b6abd2,PodSandboxId:b5b54570218d7a3f634e959852e14832243bf51e443ab1c8673213453001f405,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17617
29691594233791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mbnml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55421df5-01e8-405b-9e7e-3043102dd93a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec8e0520a183227d620ba7f06e6fcbf656a495bd062f664b388192571a6ac865,PodSandboxId:e83f0a8c8fff8d09155a6a7a94b55e6fb1538a91049d60
c42d89013787100ef5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761729670189752000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpg4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8132f7-9f54-4f52-955c-530a0a9bac9f,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd2754e2b05f6e7010d6e581df54d3b15960c3fd5f83d643680c89d03910321,PodSandboxId:b5b54570218d7a3f634e959852e14832243bf51e443ab1c8673213453001f405,Metadata:&Conta
inerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761729671049675928,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mbnml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55421df5-01e8-405b-9e7e-3043102dd93a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e396e3fde16e099c7d87988760f795ac52fc48097e29b7b15c95f311da21681a,PodSandboxId:ffeec05e47c6b08459815516011445be572be3168427237065383eb1f66d3959,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761729670091242140,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04b0b35390cedc3ee1faf204552e81c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0dd75a034ccc17affda4545c12744727c0bc851b283773181a99feaafa121e3,PodSandboxId:e1bb2f22e314cfaf737e8e8cec7e11b39452f0f4072c6a2eab483e70a145a217,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761729670061595063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dab438294c3ac2591ee4d5e4a4a1192,},Annotations:map[string]string{io
.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978058fbf1426a69d6043d86933e5e3e34440b82ba8161506c60e7c7270ab8cb,PodSandboxId:8999e69316f7a7633b1e1c160264cc860eb9d99bb0db8f0635abf7dba3f05339,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761729670081445852,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893324,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 38cffe6a3081c1b78af5ea0d943026dd,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:356b9115644ec8e980fd5232f9fd2fae910e8634d4dbbe358a4c53e4c81c9e6e,PodSandboxId:ea2ba9c2c531e3cc2962958848da4fa34ae7b66903602bb093b23627926f2975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761729669934974026,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0982e617a8fee3e09b26bd173358c235,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e19c266-2e5b-4899-b4ad-e42a38d30429 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.278682651Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3128c685-6c63-4211-a2c7-d360ff612035 name=/runtime.v1.RuntimeService/Version
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.278763209Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3128c685-6c63-4211-a2c7-d360ff612035 name=/runtime.v1.RuntimeService/Version
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.281551665Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79d6a68b-7d95-46aa-8860-d139f38b9660 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.282704782Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761729724282677542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79d6a68b-7d95-46aa-8860-d139f38b9660 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.284171241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a97f9707-9aec-43b2-863a-a3d6bdfb4408 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.284379674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a97f9707-9aec-43b2-863a-a3d6bdfb4408 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.284833081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a7c24ada82354252cb66a59fee6906bea1753d5ccda6ac63d1ad202ca3ba6a0,PodSandboxId:8999e69316f7a7633b1e1c160264cc860eb9d99bb0db8f0635abf7dba3f05339,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761729701404973668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cffe6a3081c1b78af5ea0d943026dd,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a02d15b16ad3c67a3633ac6435beb5ac8c22cf04abd6013596672a275dd612fb,PodSandboxId:ea2ba9c2c531e3cc2962958848da4fa34ae7b66903602bb093b23627926f2975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761729701411871970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0982e617a8fee3e09b26bd173358c235,},Annotations:map[string]string{io
.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22c10d8c5a7c3ead98af69a3d786354abb8cc82b5f1423a8a935875cb39bad2,PodSandboxId:e1bb2f22e314cfaf737e8e8cec7e11b39452f0f4072c6a2eab483e70a145a217,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761729701380317870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893324,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dab438294c3ac2591ee4d5e4a4a1192,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d20b1c7c467c25702547b329e476cf01e6d70b701649662b039db08d0b32f6,PodSandboxId:ffeec05e47c6b08459815516011445be572be3168427237065383eb1f66d3959,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761729701370019813,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04b0b35390cedc3ee1faf204552e81c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:100cbc4316bc7d501b6b10f4a8006b0ef5aa5632a8126bc2413b7eb1f321cf64,PodSandboxId:e83f0a8c8fff8d09155a6a7a94b55e6fb1538a91049d60c42d89013787100ef5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Sta
te:CONTAINER_RUNNING,CreatedAt:1761729693164137072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpg4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8132f7-9f54-4f52-955c-530a0a9bac9f,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:424dd140b4a3e95d08671486ecc075a0dcd83954d6f4de1fad7aad1dd5b6abd2,PodSandboxId:b5b54570218d7a3f634e959852e14832243bf51e443ab1c8673213453001f405,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17617
29691594233791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mbnml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55421df5-01e8-405b-9e7e-3043102dd93a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec8e0520a183227d620ba7f06e6fcbf656a495bd062f664b388192571a6ac865,PodSandboxId:e83f0a8c8fff8d09155a6a7a94b55e6fb1538a91049d60
c42d89013787100ef5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761729670189752000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpg4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8132f7-9f54-4f52-955c-530a0a9bac9f,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd2754e2b05f6e7010d6e581df54d3b15960c3fd5f83d643680c89d03910321,PodSandboxId:b5b54570218d7a3f634e959852e14832243bf51e443ab1c8673213453001f405,Metadata:&Conta
inerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761729671049675928,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mbnml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55421df5-01e8-405b-9e7e-3043102dd93a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e396e3fde16e099c7d87988760f795ac52fc48097e29b7b15c95f311da21681a,PodSandboxId:ffeec05e47c6b08459815516011445be572be3168427237065383eb1f66d3959,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761729670091242140,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04b0b35390cedc3ee1faf204552e81c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0dd75a034ccc17affda4545c12744727c0bc851b283773181a99feaafa121e3,PodSandboxId:e1bb2f22e314cfaf737e8e8cec7e11b39452f0f4072c6a2eab483e70a145a217,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761729670061595063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dab438294c3ac2591ee4d5e4a4a1192,},Annotations:map[string]string{io
.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978058fbf1426a69d6043d86933e5e3e34440b82ba8161506c60e7c7270ab8cb,PodSandboxId:8999e69316f7a7633b1e1c160264cc860eb9d99bb0db8f0635abf7dba3f05339,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761729670081445852,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893324,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 38cffe6a3081c1b78af5ea0d943026dd,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:356b9115644ec8e980fd5232f9fd2fae910e8634d4dbbe358a4c53e4c81c9e6e,PodSandboxId:ea2ba9c2c531e3cc2962958848da4fa34ae7b66903602bb093b23627926f2975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761729669934974026,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0982e617a8fee3e09b26bd173358c235,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a97f9707-9aec-43b2-863a-a3d6bdfb4408 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.333373304Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d58cbe2-56e2-4676-91e3-067b38e1944b name=/runtime.v1.RuntimeService/Version
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.334101480Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d58cbe2-56e2-4676-91e3-067b38e1944b name=/runtime.v1.RuntimeService/Version
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.336047685Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b4e2716-e2e7-4f3b-b749-4b074d4343c6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.336445655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761729724336420089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b4e2716-e2e7-4f3b-b749-4b074d4343c6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.339094374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4fa7367-1843-442c-978c-34b21429ee42 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.339231721Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4fa7367-1843-442c-978c-34b21429ee42 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.339551588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a7c24ada82354252cb66a59fee6906bea1753d5ccda6ac63d1ad202ca3ba6a0,PodSandboxId:8999e69316f7a7633b1e1c160264cc860eb9d99bb0db8f0635abf7dba3f05339,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761729701404973668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cffe6a3081c1b78af5ea0d943026dd,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a02d15b16ad3c67a3633ac6435beb5ac8c22cf04abd6013596672a275dd612fb,PodSandboxId:ea2ba9c2c531e3cc2962958848da4fa34ae7b66903602bb093b23627926f2975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761729701411871970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0982e617a8fee3e09b26bd173358c235,},Annotations:map[string]string{io
.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22c10d8c5a7c3ead98af69a3d786354abb8cc82b5f1423a8a935875cb39bad2,PodSandboxId:e1bb2f22e314cfaf737e8e8cec7e11b39452f0f4072c6a2eab483e70a145a217,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761729701380317870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893324,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dab438294c3ac2591ee4d5e4a4a1192,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d20b1c7c467c25702547b329e476cf01e6d70b701649662b039db08d0b32f6,PodSandboxId:ffeec05e47c6b08459815516011445be572be3168427237065383eb1f66d3959,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761729701370019813,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04b0b35390cedc3ee1faf204552e81c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:100cbc4316bc7d501b6b10f4a8006b0ef5aa5632a8126bc2413b7eb1f321cf64,PodSandboxId:e83f0a8c8fff8d09155a6a7a94b55e6fb1538a91049d60c42d89013787100ef5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Sta
te:CONTAINER_RUNNING,CreatedAt:1761729693164137072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpg4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8132f7-9f54-4f52-955c-530a0a9bac9f,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:424dd140b4a3e95d08671486ecc075a0dcd83954d6f4de1fad7aad1dd5b6abd2,PodSandboxId:b5b54570218d7a3f634e959852e14832243bf51e443ab1c8673213453001f405,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17617
29691594233791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mbnml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55421df5-01e8-405b-9e7e-3043102dd93a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec8e0520a183227d620ba7f06e6fcbf656a495bd062f664b388192571a6ac865,PodSandboxId:e83f0a8c8fff8d09155a6a7a94b55e6fb1538a91049d60
c42d89013787100ef5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761729670189752000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpg4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8132f7-9f54-4f52-955c-530a0a9bac9f,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd2754e2b05f6e7010d6e581df54d3b15960c3fd5f83d643680c89d03910321,PodSandboxId:b5b54570218d7a3f634e959852e14832243bf51e443ab1c8673213453001f405,Metadata:&Conta
inerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761729671049675928,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mbnml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55421df5-01e8-405b-9e7e-3043102dd93a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e396e3fde16e099c7d87988760f795ac52fc48097e29b7b15c95f311da21681a,PodSandboxId:ffeec05e47c6b08459815516011445be572be3168427237065383eb1f66d3959,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761729670091242140,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04b0b35390cedc3ee1faf204552e81c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0dd75a034ccc17affda4545c12744727c0bc851b283773181a99feaafa121e3,PodSandboxId:e1bb2f22e314cfaf737e8e8cec7e11b39452f0f4072c6a2eab483e70a145a217,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761729670061595063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dab438294c3ac2591ee4d5e4a4a1192,},Annotations:map[string]string{io
.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978058fbf1426a69d6043d86933e5e3e34440b82ba8161506c60e7c7270ab8cb,PodSandboxId:8999e69316f7a7633b1e1c160264cc860eb9d99bb0db8f0635abf7dba3f05339,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761729670081445852,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893324,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 38cffe6a3081c1b78af5ea0d943026dd,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:356b9115644ec8e980fd5232f9fd2fae910e8634d4dbbe358a4c53e4c81c9e6e,PodSandboxId:ea2ba9c2c531e3cc2962958848da4fa34ae7b66903602bb093b23627926f2975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761729669934974026,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0982e617a8fee3e09b26bd173358c235,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4fa7367-1843-442c-978c-34b21429ee42 name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.399178974Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6352612-6c06-4c84-970d-3dd476659840 name=/runtime.v1.RuntimeService/Version
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.399414052Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6352612-6c06-4c84-970d-3dd476659840 name=/runtime.v1.RuntimeService/Version
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.400654322Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6de8ec81-c34b-4fcb-8b35-7fb367026d9f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.401210977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761729724401187389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6de8ec81-c34b-4fcb-8b35-7fb367026d9f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.401682863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c14d148-5cf4-4a66-ac4a-fc4aa85f4d8d name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.401852797Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c14d148-5cf4-4a66-ac4a-fc4aa85f4d8d name=/runtime.v1.RuntimeService/ListContainers
	Oct 29 09:22:04 pause-893324 crio[2537]: time="2025-10-29 09:22:04.402212067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a7c24ada82354252cb66a59fee6906bea1753d5ccda6ac63d1ad202ca3ba6a0,PodSandboxId:8999e69316f7a7633b1e1c160264cc860eb9d99bb0db8f0635abf7dba3f05339,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761729701404973668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cffe6a3081c1b78af5ea0d943026dd,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a02d15b16ad3c67a3633ac6435beb5ac8c22cf04abd6013596672a275dd612fb,PodSandboxId:ea2ba9c2c531e3cc2962958848da4fa34ae7b66903602bb093b23627926f2975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761729701411871970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0982e617a8fee3e09b26bd173358c235,},Annotations:map[string]string{io
.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22c10d8c5a7c3ead98af69a3d786354abb8cc82b5f1423a8a935875cb39bad2,PodSandboxId:e1bb2f22e314cfaf737e8e8cec7e11b39452f0f4072c6a2eab483e70a145a217,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761729701380317870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893324,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dab438294c3ac2591ee4d5e4a4a1192,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d20b1c7c467c25702547b329e476cf01e6d70b701649662b039db08d0b32f6,PodSandboxId:ffeec05e47c6b08459815516011445be572be3168427237065383eb1f66d3959,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761729701370019813,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04b0b35390cedc3ee1faf204552e81c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:100cbc4316bc7d501b6b10f4a8006b0ef5aa5632a8126bc2413b7eb1f321cf64,PodSandboxId:e83f0a8c8fff8d09155a6a7a94b55e6fb1538a91049d60c42d89013787100ef5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Sta
te:CONTAINER_RUNNING,CreatedAt:1761729693164137072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpg4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8132f7-9f54-4f52-955c-530a0a9bac9f,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:424dd140b4a3e95d08671486ecc075a0dcd83954d6f4de1fad7aad1dd5b6abd2,PodSandboxId:b5b54570218d7a3f634e959852e14832243bf51e443ab1c8673213453001f405,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17617
29691594233791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mbnml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55421df5-01e8-405b-9e7e-3043102dd93a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec8e0520a183227d620ba7f06e6fcbf656a495bd062f664b388192571a6ac865,PodSandboxId:e83f0a8c8fff8d09155a6a7a94b55e6fb1538a91049d60
c42d89013787100ef5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761729670189752000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpg4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8132f7-9f54-4f52-955c-530a0a9bac9f,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd2754e2b05f6e7010d6e581df54d3b15960c3fd5f83d643680c89d03910321,PodSandboxId:b5b54570218d7a3f634e959852e14832243bf51e443ab1c8673213453001f405,Metadata:&Conta
inerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761729671049675928,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mbnml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55421df5-01e8-405b-9e7e-3043102dd93a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e396e3fde16e099c7d87988760f795ac52fc48097e29b7b15c95f311da21681a,PodSandboxId:ffeec05e47c6b08459815516011445be572be3168427237065383eb1f66d3959,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761729670091242140,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c04b0b35390cedc3ee1faf204552e81c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0dd75a034ccc17affda4545c12744727c0bc851b283773181a99feaafa121e3,PodSandboxId:e1bb2f22e314cfaf737e8e8cec7e11b39452f0f4072c6a2eab483e70a145a217,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761729670061595063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dab438294c3ac2591ee4d5e4a4a1192,},Annotations:map[string]string{io
.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978058fbf1426a69d6043d86933e5e3e34440b82ba8161506c60e7c7270ab8cb,PodSandboxId:8999e69316f7a7633b1e1c160264cc860eb9d99bb0db8f0635abf7dba3f05339,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761729670081445852,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893324,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 38cffe6a3081c1b78af5ea0d943026dd,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:356b9115644ec8e980fd5232f9fd2fae910e8634d4dbbe358a4c53e4c81c9e6e,PodSandboxId:ea2ba9c2c531e3cc2962958848da4fa34ae7b66903602bb093b23627926f2975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761729669934974026,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0982e617a8fee3e09b26bd173358c235,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c14d148-5cf4-4a66-ac4a-fc4aa85f4d8d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a02d15b16ad3c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   23 seconds ago      Running             kube-apiserver            2                   ea2ba9c2c531e       kube-apiserver-pause-893324
	0a7c24ada8235       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   23 seconds ago      Running             kube-scheduler            2                   8999e69316f7a       kube-scheduler-pause-893324
	c22c10d8c5a7c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   23 seconds ago      Running             kube-controller-manager   2                   e1bb2f22e314c       kube-controller-manager-pause-893324
	b8d20b1c7c467       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   23 seconds ago      Running             etcd                      2                   ffeec05e47c6b       etcd-pause-893324
	100cbc4316bc7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   31 seconds ago      Running             kube-proxy                2                   e83f0a8c8fff8       kube-proxy-dpg4s
	424dd140b4a3e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   32 seconds ago      Running             coredns                   2                   b5b54570218d7       coredns-66bc5c9577-mbnml
	fcd2754e2b05f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   53 seconds ago      Exited              coredns                   1                   b5b54570218d7       coredns-66bc5c9577-mbnml
	ec8e0520a1832       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   54 seconds ago      Exited              kube-proxy                1                   e83f0a8c8fff8       kube-proxy-dpg4s
	e396e3fde16e0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   54 seconds ago      Exited              etcd                      1                   ffeec05e47c6b       etcd-pause-893324
	978058fbf1426       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   54 seconds ago      Exited              kube-scheduler            1                   8999e69316f7a       kube-scheduler-pause-893324
	a0dd75a034ccc       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   54 seconds ago      Exited              kube-controller-manager   1                   e1bb2f22e314c       kube-controller-manager-pause-893324
	356b9115644ec       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   54 seconds ago      Exited              kube-apiserver            1                   ea2ba9c2c531e       kube-apiserver-pause-893324
	
	
	==> coredns [424dd140b4a3e95d08671486ecc075a0dcd83954d6f4de1fad7aad1dd5b6abd2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49388 - 6788 "HINFO IN 2590175597073062928.3472498003392159919. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021884809s
	
	
	==> coredns [fcd2754e2b05f6e7010d6e581df54d3b15960c3fd5f83d643680c89d03910321] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:56962 - 27438 "HINFO IN 8068587486760032718.2574323826684984277. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029882881s
	
	
	==> describe nodes <==
	Name:               pause-893324
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-893324
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e75a8b4e62513235c9783e62910f4ea4821b9aac
	                    minikube.k8s.io/name=pause-893324
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_29T09_20_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Oct 2025 09:20:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-893324
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Oct 2025 09:21:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Oct 2025 09:21:44 +0000   Wed, 29 Oct 2025 09:20:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Oct 2025 09:21:44 +0000   Wed, 29 Oct 2025 09:20:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Oct 2025 09:21:44 +0000   Wed, 29 Oct 2025 09:20:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Oct 2025 09:21:44 +0000   Wed, 29 Oct 2025 09:20:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.89
	  Hostname:    pause-893324
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c020e8136d9402cb3596e31a6beaac2
	  System UUID:                1c020e81-36d9-402c-b359-6e31a6beaac2
	  Boot ID:                    47b66fc5-ce1b-46d3-a362-df92c600f27a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-mbnml                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     102s
	  kube-system                 etcd-pause-893324                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         107s
	  kube-system                 kube-apiserver-pause-893324             250m (12%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-pause-893324    200m (10%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-dpg4s                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-pause-893324             100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  Starting                 30s                  kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientPID     114s (x7 over 114s)  kubelet          Node pause-893324 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node pause-893324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node pause-893324 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  114s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  107s                 kubelet          Node pause-893324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s                 kubelet          Node pause-893324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s                 kubelet          Node pause-893324 status is now: NodeHasSufficientPID
	  Normal  NodeReady                106s                 kubelet          Node pause-893324 status is now: NodeReady
	  Normal  RegisteredNode           104s                 node-controller  Node pause-893324 event: Registered Node pause-893324 in Controller
	  Normal  RegisteredNode           47s                  node-controller  Node pause-893324 event: Registered Node pause-893324 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node pause-893324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node pause-893324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node pause-893324 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                  node-controller  Node pause-893324 event: Registered Node pause-893324 in Controller
	
	
	==> dmesg <==
	[Oct29 09:19] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001545] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002946] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.156012] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.091333] kauditd_printk_skb: 1 callbacks suppressed
	[Oct29 09:20] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.152942] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.146280] kauditd_printk_skb: 18 callbacks suppressed
	[ +33.080970] kauditd_printk_skb: 190 callbacks suppressed
	[Oct29 09:21] kauditd_printk_skb: 297 callbacks suppressed
	[  +3.425668] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.375790] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.119369] kauditd_printk_skb: 29 callbacks suppressed
	[  +1.735244] kauditd_printk_skb: 79 callbacks suppressed
	
	
	==> etcd [b8d20b1c7c467c25702547b329e476cf01e6d70b701649662b039db08d0b32f6] <==
	{"level":"warn","ts":"2025-10-29T09:21:43.230840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.255375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.265663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.295125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.306220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.319978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.330500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.347772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.354960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.369420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.383031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.400582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.410653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.421015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.441739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.451694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.455069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.470941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.481543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.501623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.507081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.514408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.528386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.539473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:43.638017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46364","server-name":"","error":"EOF"}
	
	
	==> etcd [e396e3fde16e099c7d87988760f795ac52fc48097e29b7b15c95f311da21681a] <==
	{"level":"warn","ts":"2025-10-29T09:21:13.555722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:13.592600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:13.641159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:13.654147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:13.668046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:13.685362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-29T09:21:13.765156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56532","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-29T09:21:21.861314Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-29T09:21:21.861467Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-893324","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.89:2380"],"advertise-client-urls":["https://192.168.50.89:2379"]}
	{"level":"error","ts":"2025-10-29T09:21:21.861588Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-29T09:21:21.861771Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-29T09:21:28.864697Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T09:21:28.864803Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e6d0eb5dc2e1f76f","current-leader-member-id":"e6d0eb5dc2e1f76f"}
	{"level":"info","ts":"2025-10-29T09:21:28.864962Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-29T09:21:28.864981Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-29T09:21:28.866398Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.89:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-29T09:21:28.866584Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.89:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-29T09:21:28.866617Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.89:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-29T09:21:28.866962Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-29T09:21:28.870176Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-29T09:21:28.870690Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T09:21:28.873286Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.89:2380"}
	{"level":"error","ts":"2025-10-29T09:21:28.873383Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.89:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-29T09:21:28.873470Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.89:2380"}
	{"level":"info","ts":"2025-10-29T09:21:28.873502Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-893324","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.89:2380"],"advertise-client-urls":["https://192.168.50.89:2379"]}
	
	
	==> kernel <==
	 09:22:04 up 2 min,  0 users,  load average: 0.81, 0.36, 0.14
	Linux pause-893324 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [356b9115644ec8e980fd5232f9fd2fae910e8634d4dbbe358a4c53e4c81c9e6e] <==
	W1029 09:21:37.768200       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:37.848544       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-10-29T09:21:37.858633Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f10000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	W1029 09:21:37.874979       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:37.881966       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:37.890985       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-10-29T09:21:37.923127Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000273c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	W1029 09:21:37.929045       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:37.956012       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:37.975084       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.132719       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.160498       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.180582       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.211775       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.313203       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.325831       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.397284       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.479871       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.516185       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.549703       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.585001       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.671078       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.742970       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.744538       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1029 09:21:38.987664       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a02d15b16ad3c67a3633ac6435beb5ac8c22cf04abd6013596672a275dd612fb] <==
	I1029 09:21:44.506484       1 cache.go:39] Caches are synced for autoregister controller
	I1029 09:21:44.515074       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1029 09:21:44.536367       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1029 09:21:44.536425       1 policy_source.go:240] refreshing policies
	I1029 09:21:44.536773       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1029 09:21:44.545572       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1029 09:21:44.563952       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1029 09:21:44.565454       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1029 09:21:44.567246       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1029 09:21:44.570426       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1029 09:21:44.570458       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1029 09:21:44.571945       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1029 09:21:44.572107       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1029 09:21:44.572198       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1029 09:21:44.596345       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1029 09:21:44.596550       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1029 09:21:44.978530       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1029 09:21:45.270167       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1029 09:21:46.158767       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1029 09:21:46.208799       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1029 09:21:46.242276       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1029 09:21:46.250167       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1029 09:21:48.027150       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1029 09:21:48.124861       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1029 09:21:48.224022       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a0dd75a034ccc17affda4545c12744727c0bc851b283773181a99feaafa121e3] <==
	I1029 09:21:17.947198       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1029 09:21:17.948383       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1029 09:21:17.949505       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1029 09:21:17.952632       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1029 09:21:17.954973       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:21:17.956164       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:21:17.957422       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1029 09:21:17.976553       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1029 09:21:17.977234       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:21:17.977267       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:21:17.977278       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1029 09:21:17.977393       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:21:17.982964       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1029 09:21:17.983057       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1029 09:21:17.983067       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 09:21:17.983079       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:21:17.983316       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1029 09:21:17.984240       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1029 09:21:17.984351       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-893324"
	I1029 09:21:17.984439       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1029 09:21:17.983350       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 09:21:17.983326       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:21:17.983335       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1029 09:21:17.983342       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:21:17.987930       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [c22c10d8c5a7c3ead98af69a3d786354abb8cc82b5f1423a8a935875cb39bad2] <==
	I1029 09:21:47.816228       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1029 09:21:47.818695       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1029 09:21:47.820322       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1029 09:21:47.820443       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1029 09:21:47.821439       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1029 09:21:47.821512       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1029 09:21:47.821564       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1029 09:21:47.821570       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1029 09:21:47.821768       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1029 09:21:47.823967       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1029 09:21:47.826350       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:21:47.830951       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1029 09:21:47.839291       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1029 09:21:47.845612       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1029 09:21:47.854316       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:21:47.860645       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1029 09:21:47.864326       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1029 09:21:47.865359       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1029 09:21:47.868798       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1029 09:21:47.870437       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1029 09:21:47.870485       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1029 09:21:47.870558       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1029 09:21:47.870751       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1029 09:21:47.870876       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1029 09:21:47.870912       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [100cbc4316bc7d501b6b10f4a8006b0ef5aa5632a8126bc2413b7eb1f321cf64] <==
	I1029 09:21:33.920617       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:21:33.920645       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.89"]
	E1029 09:21:33.920733       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:21:33.953569       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1029 09:21:33.953685       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1029 09:21:33.953745       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:21:33.963158       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:21:33.963433       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:21:33.963578       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:21:33.968295       1 config.go:200] "Starting service config controller"
	I1029 09:21:33.968307       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:21:33.968323       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:21:33.968327       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:21:33.969132       1 config.go:309] "Starting node config controller"
	I1029 09:21:33.969154       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:21:33.969160       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:21:33.969621       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:21:33.969629       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:21:34.069089       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 09:21:34.069159       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:21:34.069736       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	E1029 09:21:39.263127       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": unexpected EOF"
	
	
	==> kube-proxy [ec8e0520a183227d620ba7f06e6fcbf656a495bd062f664b388192571a6ac865] <==
	I1029 09:21:12.909508       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1029 09:21:14.611434       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1029 09:21:14.612142       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.89"]
	E1029 09:21:14.616158       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1029 09:21:14.671212       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1029 09:21:14.671275       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1029 09:21:14.671297       1 server_linux.go:132] "Using iptables Proxier"
	I1029 09:21:14.682660       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1029 09:21:14.683067       1 server.go:527] "Version info" version="v1.34.1"
	I1029 09:21:14.683107       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:21:14.687793       1 config.go:200] "Starting service config controller"
	I1029 09:21:14.687854       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1029 09:21:14.687873       1 config.go:106] "Starting endpoint slice config controller"
	I1029 09:21:14.687877       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1029 09:21:14.688056       1 config.go:403] "Starting serviceCIDR config controller"
	I1029 09:21:14.688081       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1029 09:21:14.688860       1 config.go:309] "Starting node config controller"
	I1029 09:21:14.688957       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1029 09:21:14.688979       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1029 09:21:14.788009       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1029 09:21:14.788058       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1029 09:21:14.788613       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0a7c24ada82354252cb66a59fee6906bea1753d5ccda6ac63d1ad202ca3ba6a0] <==
	I1029 09:21:43.204733       1 serving.go:386] Generated self-signed cert in-memory
	I1029 09:21:45.028120       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:21:45.028167       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:21:45.038965       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1029 09:21:45.039028       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1029 09:21:45.041185       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:21:45.041270       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:21:45.042302       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:21:45.042333       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:21:45.042957       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:21:45.043300       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:21:45.139859       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1029 09:21:45.143503       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:21:45.143605       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [978058fbf1426a69d6043d86933e5e3e34440b82ba8161506c60e7c7270ab8cb] <==
	I1029 09:21:13.557540       1 serving.go:386] Generated self-signed cert in-memory
	I1029 09:21:14.613023       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1029 09:21:14.613110       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1029 09:21:14.618667       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1029 09:21:14.618779       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1029 09:21:14.618801       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1029 09:21:14.618821       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1029 09:21:14.620977       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:21:14.621007       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:21:14.621039       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:21:14.621045       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:21:14.719357       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1029 09:21:14.722139       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:21:14.722372       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:21:28.939403       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1029 09:21:28.939445       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1029 09:21:28.939469       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1029 09:21:28.939491       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1029 09:21:28.939518       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1029 09:21:28.939542       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1029 09:21:28.939818       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1029 09:21:28.940702       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 29 09:21:44 pause-893324 kubelet[3863]: E1029 09:21:44.067529    3863 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-893324\" not found" node="pause-893324"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: E1029 09:21:44.069160    3863 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-893324\" not found" node="pause-893324"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: E1029 09:21:44.070055    3863 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-893324\" not found" node="pause-893324"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: I1029 09:21:44.420067    3863 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-893324"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: E1029 09:21:44.561582    3863 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-893324\" already exists" pod="kube-system/etcd-pause-893324"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: I1029 09:21:44.561627    3863 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-893324"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: E1029 09:21:44.576698    3863 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-893324\" already exists" pod="kube-system/kube-apiserver-pause-893324"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: I1029 09:21:44.576740    3863 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-893324"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: E1029 09:21:44.597602    3863 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-893324\" already exists" pod="kube-system/kube-controller-manager-pause-893324"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: I1029 09:21:44.597643    3863 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-893324"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: E1029 09:21:44.607362    3863 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-893324\" already exists" pod="kube-system/kube-scheduler-pause-893324"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: I1029 09:21:44.627857    3863 kubelet_node_status.go:124] "Node was previously registered" node="pause-893324"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: I1029 09:21:44.627971    3863 kubelet_node_status.go:78] "Successfully registered node" node="pause-893324"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: I1029 09:21:44.628011    3863 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: I1029 09:21:44.630039    3863 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: I1029 09:21:44.898308    3863 apiserver.go:52] "Watching apiserver"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: I1029 09:21:44.926991    3863 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: I1029 09:21:44.967073    3863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f8132f7-9f54-4f52-955c-530a0a9bac9f-xtables-lock\") pod \"kube-proxy-dpg4s\" (UID: \"5f8132f7-9f54-4f52-955c-530a0a9bac9f\") " pod="kube-system/kube-proxy-dpg4s"
	Oct 29 09:21:44 pause-893324 kubelet[3863]: I1029 09:21:44.967698    3863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f8132f7-9f54-4f52-955c-530a0a9bac9f-lib-modules\") pod \"kube-proxy-dpg4s\" (UID: \"5f8132f7-9f54-4f52-955c-530a0a9bac9f\") " pod="kube-system/kube-proxy-dpg4s"
	Oct 29 09:21:45 pause-893324 kubelet[3863]: I1029 09:21:45.074814    3863 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-893324"
	Oct 29 09:21:45 pause-893324 kubelet[3863]: E1029 09:21:45.098155    3863 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-893324\" already exists" pod="kube-system/etcd-pause-893324"
	Oct 29 09:21:51 pause-893324 kubelet[3863]: E1029 09:21:51.033531    3863 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761729711032331424  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 29 09:21:51 pause-893324 kubelet[3863]: E1029 09:21:51.033552    3863 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761729711032331424  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 29 09:22:01 pause-893324 kubelet[3863]: E1029 09:22:01.035798    3863 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761729721035303152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 29 09:22:01 pause-893324 kubelet[3863]: E1029 09:22:01.035827    3863 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761729721035303152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-893324 -n pause-893324
helpers_test.go:269: (dbg) Run:  kubectl --context pause-893324 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (66.02s)

                                                
                                    

Test pass (300/343)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 32.62
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 14.01
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.18
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 1.6
22 TestOffline 96.35
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 137
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 12.5
35 TestAddons/parallel/Registry 20.12
36 TestAddons/parallel/RegistryCreds 0.64
38 TestAddons/parallel/InspektorGadget 5.31
39 TestAddons/parallel/MetricsServer 6.8
41 TestAddons/parallel/CSI 50.86
42 TestAddons/parallel/Headlamp 21.98
43 TestAddons/parallel/CloudSpanner 6.61
44 TestAddons/parallel/LocalPath 59.33
45 TestAddons/parallel/NvidiaDevicePlugin 7
46 TestAddons/parallel/Yakd 10.92
48 TestAddons/StoppedEnableDisable 90.31
49 TestCertOptions 52.19
50 TestCertExpiration 280.69
52 TestForceSystemdFlag 82.14
53 TestForceSystemdEnv 37.47
58 TestErrorSpam/setup 38.22
59 TestErrorSpam/start 0.33
60 TestErrorSpam/status 0.64
61 TestErrorSpam/pause 1.46
62 TestErrorSpam/unpause 1.69
63 TestErrorSpam/stop 89.84
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 83.37
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 39.15
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.09
75 TestFunctional/serial/CacheCmd/cache/add_local 2.23
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.5
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 40.53
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.39
86 TestFunctional/serial/LogsFileCmd 1.38
87 TestFunctional/serial/InvalidService 4.43
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 15.29
91 TestFunctional/parallel/DryRun 0.25
92 TestFunctional/parallel/InternationalLanguage 0.12
93 TestFunctional/parallel/StatusCmd 0.72
97 TestFunctional/parallel/ServiceCmdConnect 12.46
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 40.17
101 TestFunctional/parallel/SSHCmd 0.34
102 TestFunctional/parallel/CpCmd 1.11
103 TestFunctional/parallel/MySQL 26.96
104 TestFunctional/parallel/FileSync 0.17
105 TestFunctional/parallel/CertSync 1.2
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
113 TestFunctional/parallel/License 0.5
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.19
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.4
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
121 TestFunctional/parallel/ImageCommands/ImageBuild 11.1
122 TestFunctional/parallel/ImageCommands/Setup 1.93
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
124 TestFunctional/parallel/MountCmd/any-port 8.95
125 TestFunctional/parallel/ProfileCmd/profile_list 0.31
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.52
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.97
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.73
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.65
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
137 TestFunctional/parallel/MountCmd/specific-port 1.36
138 TestFunctional/parallel/ServiceCmd/List 0.45
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.42
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.23
150 TestFunctional/parallel/ServiceCmd/Format 0.26
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.13
152 TestFunctional/parallel/ServiceCmd/URL 0.24
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 236.38
161 TestMultiControlPlane/serial/DeployApp 7.62
162 TestMultiControlPlane/serial/PingHostFromPods 1.26
163 TestMultiControlPlane/serial/AddWorkerNode 46.46
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.68
166 TestMultiControlPlane/serial/CopyFile 10.61
167 TestMultiControlPlane/serial/StopSecondaryNode 69.9
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.51
169 TestMultiControlPlane/serial/RestartSecondaryNode 34.21
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.79
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 358.37
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.41
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
174 TestMultiControlPlane/serial/StopCluster 250.53
175 TestMultiControlPlane/serial/RestartCluster 87.15
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.49
177 TestMultiControlPlane/serial/AddSecondaryNode 72.69
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.65
183 TestJSONOutput/start/Command 77.09
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.71
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.61
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.79
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 73.21
215 TestMountStart/serial/StartWithMountFirst 20.87
216 TestMountStart/serial/VerifyMountFirst 0.31
217 TestMountStart/serial/StartWithMountSecond 20.65
218 TestMountStart/serial/VerifyMountSecond 0.3
219 TestMountStart/serial/DeleteFirst 0.69
220 TestMountStart/serial/VerifyMountPostDelete 0.3
221 TestMountStart/serial/Stop 1.21
222 TestMountStart/serial/RestartStopped 18.42
223 TestMountStart/serial/VerifyMountPostStop 0.31
226 TestMultiNode/serial/FreshStart2Nodes 94.49
227 TestMultiNode/serial/DeployApp2Nodes 6.33
228 TestMultiNode/serial/PingHostFrom2Pods 0.88
229 TestMultiNode/serial/AddNode 45.39
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.45
232 TestMultiNode/serial/CopyFile 5.96
233 TestMultiNode/serial/StopNode 2.39
234 TestMultiNode/serial/StartAfterStop 39.63
235 TestMultiNode/serial/RestartKeepsNodes 284.66
236 TestMultiNode/serial/DeleteNode 2.59
237 TestMultiNode/serial/StopMultiNode 154
238 TestMultiNode/serial/RestartMultiNode 112.84
239 TestMultiNode/serial/ValidateNameConflict 37.88
246 TestScheduledStopUnix 107.43
250 TestRunningBinaryUpgrade 123.72
252 TestKubernetesUpgrade 210.39
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/StartWithK8s 98.94
264 TestNetworkPlugins/group/false 6.19
268 TestISOImage/Setup 47.65
269 TestNoKubernetes/serial/StartWithStopK8s 44.39
271 TestISOImage/Binaries/crictl 0.24
272 TestISOImage/Binaries/curl 0.18
273 TestISOImage/Binaries/docker 0.19
274 TestISOImage/Binaries/git 0.18
275 TestISOImage/Binaries/iptables 0.2
276 TestISOImage/Binaries/podman 0.19
277 TestISOImage/Binaries/rsync 0.19
278 TestISOImage/Binaries/socat 0.21
279 TestISOImage/Binaries/wget 0.21
280 TestISOImage/Binaries/VBoxControl 0.19
281 TestISOImage/Binaries/VBoxService 0.21
282 TestNoKubernetes/serial/Start 50.56
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
284 TestNoKubernetes/serial/ProfileList 16.1
285 TestNoKubernetes/serial/Stop 1.33
286 TestNoKubernetes/serial/StartNoArgs 21.35
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
288 TestStoppedBinaryUpgrade/Setup 3.28
297 TestPause/serial/Start 79.76
298 TestStoppedBinaryUpgrade/Upgrade 120.59
300 TestNetworkPlugins/group/auto/Start 94.55
301 TestStoppedBinaryUpgrade/MinikubeLogs 1.37
302 TestNetworkPlugins/group/kindnet/Start 67.99
303 TestNetworkPlugins/group/calico/Start 84.92
304 TestNetworkPlugins/group/custom-flannel/Start 100.91
305 TestNetworkPlugins/group/kindnet/ControllerPod 6.03
306 TestNetworkPlugins/group/auto/KubeletFlags 0.18
307 TestNetworkPlugins/group/auto/NetCatPod 11.27
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
309 TestNetworkPlugins/group/kindnet/NetCatPod 12.66
310 TestNetworkPlugins/group/auto/DNS 0.2
311 TestNetworkPlugins/group/auto/Localhost 0.13
312 TestNetworkPlugins/group/auto/HairPin 0.16
313 TestNetworkPlugins/group/kindnet/DNS 0.19
314 TestNetworkPlugins/group/kindnet/Localhost 0.16
315 TestNetworkPlugins/group/kindnet/HairPin 0.15
316 TestNetworkPlugins/group/enable-default-cni/Start 57.94
317 TestNetworkPlugins/group/calico/ControllerPod 6.01
318 TestNetworkPlugins/group/flannel/Start 93.6
319 TestNetworkPlugins/group/calico/KubeletFlags 0.22
320 TestNetworkPlugins/group/calico/NetCatPod 10.35
321 TestNetworkPlugins/group/calico/DNS 0.18
322 TestNetworkPlugins/group/calico/Localhost 0.17
323 TestNetworkPlugins/group/calico/HairPin 0.16
324 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.19
325 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.28
326 TestNetworkPlugins/group/custom-flannel/DNS 0.19
327 TestNetworkPlugins/group/bridge/Start 94.16
328 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
329 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
333 TestStartStop/group/old-k8s-version/serial/FirstStart 62.3
334 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
335 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
336 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
338 TestStartStop/group/no-preload/serial/FirstStart 104.05
339 TestNetworkPlugins/group/flannel/ControllerPod 6.01
340 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
341 TestNetworkPlugins/group/flannel/NetCatPod 12.24
342 TestNetworkPlugins/group/flannel/DNS 0.2
343 TestNetworkPlugins/group/flannel/Localhost 0.17
344 TestNetworkPlugins/group/flannel/HairPin 0.23
345 TestStartStop/group/old-k8s-version/serial/DeployApp 11.36
346 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.35
347 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
348 TestNetworkPlugins/group/bridge/NetCatPod 11.32
349 TestStartStop/group/old-k8s-version/serial/Stop 87.32
351 TestStartStop/group/embed-certs/serial/FirstStart 86.67
352 TestNetworkPlugins/group/bridge/DNS 0.17
353 TestNetworkPlugins/group/bridge/Localhost 0.17
354 TestNetworkPlugins/group/bridge/HairPin 0.18
356 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.98
357 TestStartStop/group/no-preload/serial/DeployApp 11.31
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.99
359 TestStartStop/group/no-preload/serial/Stop 81.38
360 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
361 TestStartStop/group/old-k8s-version/serial/SecondStart 42.38
362 TestStartStop/group/embed-certs/serial/DeployApp 10.3
363 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
364 TestStartStop/group/embed-certs/serial/Stop 76.69
365 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.29
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
367 TestStartStop/group/default-k8s-diff-port/serial/Stop 90.68
368 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 17.01
369 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
370 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
371 TestStartStop/group/no-preload/serial/SecondStart 54.9
372 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
373 TestStartStop/group/old-k8s-version/serial/Pause 2.47
375 TestStartStop/group/newest-cni/serial/FirstStart 54.78
376 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
377 TestStartStop/group/embed-certs/serial/SecondStart 56.48
378 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
379 TestStartStop/group/newest-cni/serial/DeployApp 0
380 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.02
381 TestStartStop/group/newest-cni/serial/Stop 72.54
382 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
383 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.31
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
386 TestStartStop/group/no-preload/serial/Pause 2.67
388 TestISOImage/PersistentMounts//data 0.21
389 TestISOImage/PersistentMounts//var/lib/docker 0.19
390 TestISOImage/PersistentMounts//var/lib/cni 0.21
391 TestISOImage/PersistentMounts//var/lib/kubelet 0.21
392 TestISOImage/PersistentMounts//var/lib/minikube 0.2
393 TestISOImage/PersistentMounts//var/lib/toolbox 0.2
394 TestISOImage/PersistentMounts//var/lib/boot2docker 0.18
395 TestISOImage/eBPFSupport 0.18
396 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.01
397 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
398 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.2
399 TestStartStop/group/embed-certs/serial/Pause 2.49
400 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
401 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
402 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
403 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.37
404 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
405 TestStartStop/group/newest-cni/serial/SecondStart 32.15
406 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
408 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
409 TestStartStop/group/newest-cni/serial/Pause 2.36
x
+
TestDownloadOnly/v1.28.0/json-events (32.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-643314 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-643314 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (32.618139645s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (32.62s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1029 08:21:11.462168  141231 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1029 08:21:11.462265  141231 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-137232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-643314
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-643314: exit status 85 (77.01409ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-643314 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-643314 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:20:38
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:20:38.897198  141243 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:20:38.897463  141243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:38.897472  141243 out.go:374] Setting ErrFile to fd 2...
	I1029 08:20:38.897476  141243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:20:38.897670  141243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
	W1029 08:20:38.897786  141243 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21800-137232/.minikube/config/config.json: open /home/jenkins/minikube-integration/21800-137232/.minikube/config/config.json: no such file or directory
	I1029 08:20:38.898253  141243 out.go:368] Setting JSON to true
	I1029 08:20:38.899837  141243 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3768,"bootTime":1761722271,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 08:20:38.899919  141243 start.go:143] virtualization: kvm guest
	I1029 08:20:38.901975  141243 out.go:99] [download-only-643314] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 08:20:38.902118  141243 notify.go:221] Checking for updates...
	W1029 08:20:38.902140  141243 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21800-137232/.minikube/cache/preloaded-tarball: no such file or directory
	I1029 08:20:38.903252  141243 out.go:171] MINIKUBE_LOCATION=21800
	I1029 08:20:38.904650  141243 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:20:38.906170  141243 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	I1029 08:20:38.907503  141243 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	I1029 08:20:38.908615  141243 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1029 08:20:38.913819  141243 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1029 08:20:38.914174  141243 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:20:39.412542  141243 out.go:99] Using the kvm2 driver based on user configuration
	I1029 08:20:39.412586  141243 start.go:309] selected driver: kvm2
	I1029 08:20:39.412592  141243 start.go:930] validating driver "kvm2" against <nil>
	I1029 08:20:39.412931  141243 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 08:20:39.413432  141243 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1029 08:20:39.413597  141243 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1029 08:20:39.413642  141243 cni.go:84] Creating CNI manager for ""
	I1029 08:20:39.413700  141243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1029 08:20:39.413709  141243 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1029 08:20:39.413752  141243 start.go:353] cluster config:
	{Name:download-only-643314 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-643314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:20:39.413919  141243 iso.go:125] acquiring lock: {Name:mk91f2a3d67828aaa5b9f798c71cdbe9317767a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:20:39.415638  141243 out.go:99] Downloading VM boot image ...
	I1029 08:20:39.415665  141243 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21800-137232/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1029 08:20:52.473468  141243 out.go:99] Starting "download-only-643314" primary control-plane node in "download-only-643314" cluster
	I1029 08:20:52.473495  141243 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1029 08:20:52.581295  141243 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1029 08:20:52.581348  141243 cache.go:59] Caching tarball of preloaded images
	I1029 08:20:52.581510  141243 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1029 08:20:52.583132  141243 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1029 08:20:52.583155  141243 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1029 08:20:52.703517  141243 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1029 08:20:52.703628  141243 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21800-137232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-643314 host does not exist
	  To start a cluster, run: "minikube start -p download-only-643314"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-643314
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (14.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-019680 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-019680 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.012490996s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (14.01s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1029 08:21:25.857915  141231 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1029 08:21:25.857957  141231 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21800-137232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-019680
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-019680: exit status 85 (76.00482ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-643314 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-643314 │ jenkins │ v1.37.0 │ 29 Oct 25 08:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 29 Oct 25 08:21 UTC │ 29 Oct 25 08:21 UTC │
	│ delete  │ -p download-only-643314                                                                                                                                                 │ download-only-643314 │ jenkins │ v1.37.0 │ 29 Oct 25 08:21 UTC │ 29 Oct 25 08:21 UTC │
	│ start   │ -o=json --download-only -p download-only-019680 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-019680 │ jenkins │ v1.37.0 │ 29 Oct 25 08:21 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/29 08:21:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1029 08:21:11.900201  141535 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:21:11.900518  141535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:21:11.900528  141535 out.go:374] Setting ErrFile to fd 2...
	I1029 08:21:11.900532  141535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:21:11.900722  141535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
	I1029 08:21:11.901187  141535 out.go:368] Setting JSON to true
	I1029 08:21:11.902038  141535 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3801,"bootTime":1761722271,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 08:21:11.902138  141535 start.go:143] virtualization: kvm guest
	I1029 08:21:11.904037  141535 out.go:99] [download-only-019680] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 08:21:11.904221  141535 notify.go:221] Checking for updates...
	I1029 08:21:11.905657  141535 out.go:171] MINIKUBE_LOCATION=21800
	I1029 08:21:11.907033  141535 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:21:11.908262  141535 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	I1029 08:21:11.910104  141535 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	I1029 08:21:11.911358  141535 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1029 08:21:11.913421  141535 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1029 08:21:11.913672  141535 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:21:11.943623  141535 out.go:99] Using the kvm2 driver based on user configuration
	I1029 08:21:11.943647  141535 start.go:309] selected driver: kvm2
	I1029 08:21:11.943653  141535 start.go:930] validating driver "kvm2" against <nil>
	I1029 08:21:11.943975  141535 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1029 08:21:11.944472  141535 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1029 08:21:11.944615  141535 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1029 08:21:11.944652  141535 cni.go:84] Creating CNI manager for ""
	I1029 08:21:11.944703  141535 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1029 08:21:11.944715  141535 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1029 08:21:11.944751  141535 start.go:353] cluster config:
	{Name:download-only-019680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-019680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:21:11.944845  141535 iso.go:125] acquiring lock: {Name:mk91f2a3d67828aaa5b9f798c71cdbe9317767a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1029 08:21:11.945969  141535 out.go:99] Starting "download-only-019680" primary control-plane node in "download-only-019680" cluster
	I1029 08:21:11.945988  141535 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:21:12.057552  141535 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1029 08:21:12.057586  141535 cache.go:59] Caching tarball of preloaded images
	I1029 08:21:12.057744  141535 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1029 08:21:12.059444  141535 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1029 08:21:12.059460  141535 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1029 08:21:12.177475  141535 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1029 08:21:12.177524  141535 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21800-137232/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-019680 host does not exist
	  To start a cluster, run: "minikube start -p download-only-019680"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.18s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-019680
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (1.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1029 08:21:26.559916  141231 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-227116 --alsologtostderr --binary-mirror http://127.0.0.1:41897 --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:309: (dbg) Done: out/minikube-linux-amd64 start --download-only -p binary-mirror-227116 --alsologtostderr --binary-mirror http://127.0.0.1:41897 --driver=kvm2  --container-runtime=crio: (1.280628859s)
helpers_test.go:175: Cleaning up "binary-mirror-227116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-227116
--- PASS: TestBinaryMirror (1.60s)

                                                
                                    
x
+
TestOffline (96.35s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-574519 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-574519 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m35.317495935s)
helpers_test.go:175: Cleaning up "offline-crio-574519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-574519
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-574519: (1.031539272s)
--- PASS: TestOffline (96.35s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-131912
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-131912: exit status 85 (65.712033ms)

                                                
                                                
-- stdout --
	* Profile "addons-131912" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-131912"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-131912
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-131912: exit status 85 (65.544677ms)

                                                
                                                
-- stdout --
	* Profile "addons-131912" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-131912"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (137s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-131912 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-131912 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m17.004438884s)
--- PASS: TestAddons/Setup (137.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-131912 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-131912 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (12.5s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-131912 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-131912 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ab81e3e5-9e9b-468e-b6e6-98b1a48d05c6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ab81e3e5-9e9b-468e-b6e6-98b1a48d05c6] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 12.004081546s
addons_test.go:694: (dbg) Run:  kubectl --context addons-131912 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-131912 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-131912 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (12.50s)

                                                
                                    
x
+
TestAddons/parallel/Registry (20.12s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.595623ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-brxqs" [217b4645-7132-4c56-b6ad-dbce444c774f] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00237838s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-swmwd" [ebca9020-6bfa-4f43-82b9-f44f5142467e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003286429s
addons_test.go:392: (dbg) Run:  kubectl --context addons-131912 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-131912 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-131912 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.355639857s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 ip
2025/10/29 08:24:25 [DEBUG] GET http://192.168.39.91:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.12s)

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.64s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.223261ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-131912
addons_test.go:332: (dbg) Run:  kubectl --context addons-131912 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.64s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.31s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-zzqv6" [23340c31-f583-4fcc-8405-02ca3871d702] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004965653s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.31s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.8s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 33.112765ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-v6wds" [18d952dc-36d8-4941-857e-e4559143b825] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004027405s
addons_test.go:463: (dbg) Run:  kubectl --context addons-131912 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.80s)

                                                
                                    
x
+
TestAddons/parallel/CSI (50.86s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1029 08:24:20.236765  141231 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1029 08:24:20.239701  141231 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1029 08:24:20.239728  141231 kapi.go:107] duration metric: took 2.999952ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.011941ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-131912 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-131912 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d3d1fa66-1e6e-43a1-87a2-5e3cca2abad2] Pending
helpers_test.go:352: "task-pv-pod" [d3d1fa66-1e6e-43a1-87a2-5e3cca2abad2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [d3d1fa66-1e6e-43a1-87a2-5e3cca2abad2] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003870438s
addons_test.go:572: (dbg) Run:  kubectl --context addons-131912 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-131912 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-131912 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-131912 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-131912 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-131912 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-131912 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [57c58824-cb76-42e9-a0ce-ac0ac887804e] Pending
helpers_test.go:352: "task-pv-pod-restore" [57c58824-cb76-42e9-a0ce-ac0ac887804e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [57c58824-cb76-42e9-a0ce-ac0ac887804e] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004192837s
addons_test.go:614: (dbg) Run:  kubectl --context addons-131912 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-131912 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-131912 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-131912 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.92108198s)
--- PASS: TestAddons/parallel/CSI (50.86s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (21.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-131912 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-95fcm" [8ecb011b-af26-4738-9785-daf729d98466] Pending
helpers_test.go:352: "headlamp-6945c6f4d-95fcm" [8ecb011b-af26-4738-9785-daf729d98466] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-95fcm" [8ecb011b-af26-4738-9785-daf729d98466] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.003838394s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-131912 addons disable headlamp --alsologtostderr -v=1: (5.987105544s)
--- PASS: TestAddons/parallel/Headlamp (21.98s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.61s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-rb9lw" [15e48ad1-83f9-444b-821b-a28674eaabce] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003802055s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.61s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (59.33s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-131912 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-131912 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-131912 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [5ee68ca1-7985-4416-a094-7b7253ebfa25] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [5ee68ca1-7985-4416-a094-7b7253ebfa25] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [5ee68ca1-7985-4416-a094-7b7253ebfa25] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.003791602s
addons_test.go:967: (dbg) Run:  kubectl --context addons-131912 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 ssh "cat /opt/local-path-provisioner/pvc-4ff904af-fa12-437d-acb0-f26b2bf41ea4_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-131912 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-131912 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-131912 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.530537789s)
--- PASS: TestAddons/parallel/LocalPath (59.33s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (7s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-rxgd9" [d4d0fa6e-d26a-4f7e-ad6c-a4df4c9154ed] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005538955s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.00s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-zq9s8" [f6a40a8b-0345-4c78-a72c-4afe508e20bf] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00670409s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-131912 addons disable yakd --alsologtostderr -v=1: (5.908175361s)
--- PASS: TestAddons/parallel/Yakd (10.92s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (90.31s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-131912
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-131912: (1m30.104548521s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-131912
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-131912
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-131912
--- PASS: TestAddons/StoppedEnableDisable (90.31s)

                                                
                                    
x
+
TestCertOptions (52.19s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-611904 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1029 09:18:51.152520  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-611904 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (50.868704458s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-611904 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-611904 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-611904 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-611904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-611904
--- PASS: TestCertOptions (52.19s)

                                                
                                    
x
+
TestCertExpiration (280.69s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-042301 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-042301 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (57.137187653s)
E1029 09:18:28.946682  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:18:45.865158  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-042301 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-042301 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (42.237094783s)
helpers_test.go:175: Cleaning up "cert-expiration-042301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-042301
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-042301: (1.309877114s)
--- PASS: TestCertExpiration (280.69s)

                                                
                                    
x
+
TestForceSystemdFlag (82.14s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-964043 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-964043 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m20.984550314s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-964043 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-964043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-964043
--- PASS: TestForceSystemdFlag (82.14s)

                                                
                                    
x
+
TestForceSystemdEnv (37.47s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-763658 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-763658 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (36.565830612s)
helpers_test.go:175: Cleaning up "force-systemd-env-763658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-763658
--- PASS: TestForceSystemdEnv (37.47s)

                                                
                                    
x
+
TestErrorSpam/setup (38.22s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-397603 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-397603 --driver=kvm2  --container-runtime=crio
E1029 08:28:45.869595  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:45.876003  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:45.887396  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:45.908903  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:45.950326  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:46.031800  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:46.193362  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:46.515110  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:47.157221  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:48.438874  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:51.001884  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:28:56.123646  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:29:06.366268  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-397603 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-397603 --driver=kvm2  --container-runtime=crio: (38.223400522s)
--- PASS: TestErrorSpam/setup (38.22s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.64s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 status
--- PASS: TestErrorSpam/status (0.64s)

                                                
                                    
TestErrorSpam/pause (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 pause
--- PASS: TestErrorSpam/pause (1.46s)

                                                
                                    
TestErrorSpam/unpause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
TestErrorSpam/stop (89.84s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 stop
E1029 08:29:26.848552  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:30:07.811665  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 stop: (1m26.93043588s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 stop: (1.482258301s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-397603 --log_dir /tmp/nospam-397603 stop: (1.427050685s)
--- PASS: TestErrorSpam/stop (89.84s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21800-137232/.minikube/files/etc/test/nested/copy/141231/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (83.37s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-373499 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1029 08:31:29.735580  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-373499 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m23.365813185s)
--- PASS: TestFunctional/serial/StartWithProxy (83.37s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.15s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1029 08:32:16.363444  141231 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-373499 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-373499 --alsologtostderr -v=8: (39.152477507s)
functional_test.go:678: soft start took 39.153278938s for "functional-373499" cluster.
I1029 08:32:55.516339  141231 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (39.15s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-373499 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-373499 cache add registry.k8s.io/pause:3.1: (1.030243434s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-373499 cache add registry.k8s.io/pause:3.3: (1.100000108s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-373499 /tmp/TestFunctionalserialCacheCmdcacheadd_local1451480886/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 cache add minikube-local-cache-test:functional-373499
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-373499 cache add minikube-local-cache-test:functional-373499: (1.904668498s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 cache delete minikube-local-cache-test:functional-373499
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-373499
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-373499 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (179.622314ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.50s)
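
The cache_reload steps above are a remove/verify/reload/verify loop against the node's CRI image store. Below is a minimal Go sketch of the same loop, shown for illustration only: it assumes a minikube binary on PATH (the harness uses out/minikube-linux-amd64) and reuses the functional-373499 profile and pause image from this run; it is not the harness's own helper.

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to minikube and returns combined output; a non-zero
// exit comes back as a *exec.ExitError in err.
func run(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "functional-373499" // profile name from this run

	// Remove the image from the node's CRI store.
	run("-p", profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")

	// inspecti should now fail, because the image is gone from the node.
	if _, err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}

	// Reload whatever is in minikube's local cache back onto the node.
	if out, err := run("-p", profile, "cache", "reload"); err != nil {
		fmt.Println("cache reload failed:", err, out)
		return
	}

	// inspecti should succeed again once the cached image is restored.
	if out, err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err, out)
	} else {
		fmt.Println("image restored from cache")
	}
}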

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 kubectl -- --context functional-373499 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-373499 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.53s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-373499 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-373499 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.527537633s)
functional_test.go:776: restart took 40.527710314s for "functional-373499" cluster.
I1029 08:33:43.695539  141231 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (40.53s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-373499 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-373499 logs: (1.393601991s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 logs --file /tmp/TestFunctionalserialLogsFileCmd373471487/001/logs.txt
E1029 08:33:45.865529  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-373499 logs --file /tmp/TestFunctionalserialLogsFileCmd373471487/001/logs.txt: (1.378848271s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                    
TestFunctional/serial/InvalidService (4.43s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-373499 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-373499
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-373499: exit status 115 (251.45692ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.105:31252 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-373499 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.43s)
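
The exit status 115 above (reason SVC_UNREACHABLE) is exactly what the test asserts on: the service exists but has no running pods behind it. A hedged sketch of reading that exit code from Go follows; the command and the observed exit value come from this log, while minikube being on PATH (rather than out/minikube-linux-amd64) is an assumption.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Ask minikube for the URL of a service whose pods are not running.
	// In the run above this exited with status 115 (SVC_UNREACHABLE).
	cmd := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-373499")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("minikube service exited with status %d\n", exitErr.ExitCode())
		fmt.Print(string(out))
		return
	}
	if err != nil {
		fmt.Println("command did not run:", err)
		return
	}
	fmt.Println("unexpected success:", string(out))
}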

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-373499 config get cpus: exit status 14 (76.303431ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-373499 config get cpus: exit status 14 (59.444756ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
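
The two non-zero exits above are the expected behaviour: "minikube config get" exits with status 14 when the key is not set, and with 0 once "config set" has written it. A small sketch of that round trip, assuming minikube on PATH and the profile from this run (start-up errors other than non-zero exits are ignored in this sketch):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// config runs "minikube -p functional-373499 config <args>" and returns the
// trimmed output plus the process exit code (0 on success).
func config(args ...string) (string, int) {
	full := append([]string{"-p", "functional-373499", "config"}, args...)
	out, err := exec.Command("minikube", full...).CombinedOutput()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	config("unset", "cpus") // make sure the key is absent

	_, code := config("get", "cpus") // expected: exit 14, "specified key could not be found"
	fmt.Println("get on unset key exited with", code)

	config("set", "cpus", "2")
	val, _ := config("get", "cpus") // expected: "2", exit 0
	fmt.Println("after set, cpus =", val)

	config("unset", "cpus")
	_, code = config("get", "cpus") // back to exit 14
	fmt.Println("after unset, exit code", code)
}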

                                                
                                    
TestFunctional/parallel/DashboardCmd (15.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-373499 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-373499 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 147205: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.29s)

                                                
                                    
TestFunctional/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-373499 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-373499 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (122.571346ms)

                                                
                                                
-- stdout --
	* [functional-373499] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:33:53.714751  147141 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:33:53.715038  147141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:33:53.715052  147141 out.go:374] Setting ErrFile to fd 2...
	I1029 08:33:53.715059  147141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:33:53.715313  147141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
	I1029 08:33:53.715778  147141 out.go:368] Setting JSON to false
	I1029 08:33:53.716742  147141 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4563,"bootTime":1761722271,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 08:33:53.716838  147141 start.go:143] virtualization: kvm guest
	I1029 08:33:53.718333  147141 out.go:179] * [functional-373499] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 08:33:53.719395  147141 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:33:53.719422  147141 notify.go:221] Checking for updates...
	I1029 08:33:53.721467  147141 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:33:53.722547  147141 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	I1029 08:33:53.723533  147141 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	I1029 08:33:53.724644  147141 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 08:33:53.725757  147141 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:33:53.727188  147141 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:33:53.727696  147141 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:33:53.761430  147141 out.go:179] * Using the kvm2 driver based on existing profile
	I1029 08:33:53.762372  147141 start.go:309] selected driver: kvm2
	I1029 08:33:53.762390  147141 start.go:930] validating driver "kvm2" against &{Name:functional-373499 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-373499 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:33:53.762532  147141 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:33:53.764776  147141 out.go:203] 
	W1029 08:33:53.765828  147141 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1029 08:33:53.766790  147141 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-373499 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-373499 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-373499 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (120.205514ms)

                                                
                                                
-- stdout --
	* [functional-373499] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:33:53.590825  147126 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:33:53.590922  147126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:33:53.590927  147126 out.go:374] Setting ErrFile to fd 2...
	I1029 08:33:53.590931  147126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:33:53.591254  147126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
	I1029 08:33:53.591721  147126 out.go:368] Setting JSON to false
	I1029 08:33:53.592610  147126 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4563,"bootTime":1761722271,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 08:33:53.592702  147126 start.go:143] virtualization: kvm guest
	I1029 08:33:53.594621  147126 out.go:179] * [functional-373499] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1029 08:33:53.595881  147126 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 08:33:53.595879  147126 notify.go:221] Checking for updates...
	I1029 08:33:53.597902  147126 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 08:33:53.599221  147126 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	I1029 08:33:53.600558  147126 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	I1029 08:33:53.601879  147126 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 08:33:53.603045  147126 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 08:33:53.604649  147126 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:33:53.605104  147126 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 08:33:53.640038  147126 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1029 08:33:53.641097  147126 start.go:309] selected driver: kvm2
	I1029 08:33:53.641112  147126 start.go:930] validating driver "kvm2" against &{Name:functional-373499 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-373499 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1029 08:33:53.641218  147126 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 08:33:53.643154  147126 out.go:203] 
	W1029 08:33:53.644195  147126 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1029 08:33:53.645347  147126 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.72s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-373499 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-373499 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-2w2jb" [eb829592-a55e-4e12-90c1-aef5ba02cd00] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-2w2jb" [eb829592-a55e-4e12-90c1-aef5ba02cd00] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.006203965s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.105:32269
functional_test.go:1680: http://192.168.39.105:32269: success! body:
Request served by hello-node-connect-7d85dfc575-2w2jb

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.105:32269
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.46s)
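
The flow above is: create a deployment from the echo-server image, expose it as a NodePort service, ask minikube for the node URL, then issue an HTTP GET and check which pod answered. A compressed sketch of the same flow; kubectl and minikube on PATH and the names from this log are assumptions, and unlike the harness it does not wait for the pod to become Ready before the GET.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	const profile = "functional-373499"

	// Create and expose the echo server, as in the log above.
	exec.Command("kubectl", "--context", profile, "create", "deployment",
		"hello-node-connect", "--image", "kicbase/echo-server").Run()
	exec.Command("kubectl", "--context", profile, "expose", "deployment",
		"hello-node-connect", "--type=NodePort", "--port=8080").Run()

	// Ask minikube for the NodePort URL (http://<node-ip>:<port>).
	out, err := exec.Command("minikube", "-p", profile, "service",
		"hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("could not resolve service URL:", err)
		return
	}
	url := strings.TrimSpace(string(out)) // a single exposed port, so a single URL line

	// Fetch it; the echo server reports which pod served the request.
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed (pod may not be Ready yet):", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s responded:\n%s", url, body)
}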

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (40.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [713ab9a5-23ff-411b-884a-21d4b7477063] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003596496s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-373499 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-373499 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-373499 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-373499 apply -f testdata/storage-provisioner/pod.yaml
I1029 08:34:07.819301  141231 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [32c0bbdf-2a0b-4b9b-b721-8a4a8f90e772] Pending
helpers_test.go:352: "sp-pod" [32c0bbdf-2a0b-4b9b-b721-8a4a8f90e772] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2025/10/29 08:34:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "sp-pod" [32c0bbdf-2a0b-4b9b-b721-8a4a8f90e772] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.004683486s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-373499 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-373499 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-373499 apply -f testdata/storage-provisioner/pod.yaml
I1029 08:34:34.903808  141231 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0e70bfe8-a9a8-40b4-9561-b032f4918166] Pending
helpers_test.go:352: "sp-pod" [0e70bfe8-a9a8-40b4-9561-b032f4918166] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0e70bfe8-a9a8-40b4-9561-b032f4918166] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006847927s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-373499 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.17s)
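
The persistent-volume check above reduces to: create a claim, mount it in a pod, write a marker file, delete and recreate the pod, and confirm the marker survived. A condensed sketch of that sequence; it assumes kubectl on PATH, the functional-373499 context, and the same testdata/storage-provisioner manifests the log applies (not reproduced here), and it omits the Running-state waits the harness performs.

package main

import (
	"fmt"
	"os/exec"
)

// kc runs kubectl against the functional-373499 context and returns combined output.
func kc(args ...string) (string, error) {
	full := append([]string{"--context", "functional-373499"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	// Claim storage and start a pod that mounts it.
	kc("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ...wait here for sp-pod to be Running, as the harness does...

	// Write a marker file into the PVC-backed mount.
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the claim, and the data on it, should survive.
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ...wait again for the new sp-pod to be Running...

	// The marker written by the first pod should still be listed.
	out, err := kc("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Println(out, err)
}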

                                                
                                    
TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh -n functional-373499 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 cp functional-373499:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd607442026/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh -n functional-373499 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh -n functional-373499 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.11s)

                                                
                                    
TestFunctional/parallel/MySQL (26.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-373499 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-xk7xb" [5f63d0e9-90ee-462b-8102-691792677254] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-xk7xb" [5f63d0e9-90ee-462b-8102-691792677254] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.009727116s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-373499 exec mysql-5bb876957f-xk7xb -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-373499 exec mysql-5bb876957f-xk7xb -- mysql -ppassword -e "show databases;": exit status 1 (507.006628ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1029 08:34:27.217256  141231 retry.go:31] will retry after 1.377673734s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-373499 exec mysql-5bb876957f-xk7xb -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-373499 exec mysql-5bb876957f-xk7xb -- mysql -ppassword -e "show databases;": exit status 1 (118.164694ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1029 08:34:28.713658  141231 retry.go:31] will retry after 1.646444151s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-373499 exec mysql-5bb876957f-xk7xb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.96s)
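
The two failed "show databases;" attempts above are expected: mysqld needs a moment after the pod reports Running before it accepts socket connections, so the harness retries with an increasing delay (the retry.go lines). Below is a sketch of the same retry-until-ready idea, offered as an illustration rather than the harness's retry helper; kubectl on PATH and the pod name from this run are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll "show databases;" inside the mysql pod until mysqld is ready,
	// doubling the delay between attempts.
	args := []string{"--context", "functional-373499", "exec", "mysql-5bb876957f-xk7xb",
		"--", "mysql", "-ppassword", "-e", "show databases;"}

	delay := time.Second
	for attempt := 1; attempt <= 6; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff
	}
	fmt.Println("mysql never became ready")
}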

                                                
                                    
TestFunctional/parallel/FileSync (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/141231/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "sudo cat /etc/test/nested/copy/141231/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.17s)
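
What this test relies on: files placed under the minikube home's files/ directory on the host are copied into the guest at the same path relative to / when the cluster starts; the CopySyncFile step earlier staged .minikube/files/etc/test/nested/copy/141231/hosts, which is why /etc/test/nested/copy/141231/hosts exists inside the VM here. A hedged sketch that checks the guest-side copy; treating MINIKUBE_HOME as the .minikube directory (as it is set in this run, with ~/.minikube as the fallback) is an assumption.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Host-side source: <minikube home>/files/<path>.
	home := os.Getenv("MINIKUBE_HOME")
	if home == "" {
		userHome, _ := os.UserHomeDir()
		home = filepath.Join(userHome, ".minikube")
	}
	src := filepath.Join(home, "files", "etc", "test", "nested", "copy", "141231", "hosts")
	fmt.Println("host-side source:", src)

	// Guest-side copy: the same path rooted at / inside the VM.
	out, err := exec.Command("minikube", "-p", "functional-373499", "ssh",
		"sudo cat /etc/test/nested/copy/141231/hosts").CombinedOutput()
	if err != nil {
		fmt.Println("not synced into the guest:", err)
		return
	}
	fmt.Printf("guest copy contents: %s", out)
}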

                                                
                                    
TestFunctional/parallel/CertSync (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/141231.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "sudo cat /etc/ssl/certs/141231.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/141231.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "sudo cat /usr/share/ca-certificates/141231.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1412312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "sudo cat /etc/ssl/certs/1412312.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1412312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "sudo cat /usr/share/ca-certificates/1412312.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.20s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-373499 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-373499 ssh "sudo systemctl is-active docker": exit status 1 (200.004425ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-373499 ssh "sudo systemctl is-active containerd": exit status 1 (180.228021ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

                                                
                                    
TestFunctional/parallel/License (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-373499 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-373499 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-56fxh" [dcf9125a-e44f-4b9a-a1b8-820f27e62da9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-56fxh" [dcf9125a-e44f-4b9a-a1b8-820f27e62da9] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.00637454s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-373499 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-373499
localhost/kicbase/echo-server:functional-373499
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-373499 image ls --format short --alsologtostderr:
I1029 08:34:09.709729  147872 out.go:360] Setting OutFile to fd 1 ...
I1029 08:34:09.709975  147872 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:34:09.709983  147872 out.go:374] Setting ErrFile to fd 2...
I1029 08:34:09.709987  147872 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:34:09.710196  147872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
I1029 08:34:09.710800  147872 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:34:09.710897  147872 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:34:09.712873  147872 ssh_runner.go:195] Run: systemctl --version
I1029 08:34:09.715191  147872 main.go:143] libmachine: domain functional-373499 has defined MAC address 52:54:00:d0:f1:52 in network mk-functional-373499
I1029 08:34:09.715599  147872 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:f1:52", ip: ""} in network mk-functional-373499: {Iface:virbr1 ExpiryTime:2025-10-29 09:31:07 +0000 UTC Type:0 Mac:52:54:00:d0:f1:52 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:functional-373499 Clientid:01:52:54:00:d0:f1:52}
I1029 08:34:09.715622  147872 main.go:143] libmachine: domain functional-373499 has defined IP address 192.168.39.105 and MAC address 52:54:00:d0:f1:52 in network mk-functional-373499
I1029 08:34:09.715756  147872 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/functional-373499/id_rsa Username:docker}
I1029 08:34:09.795617  147872 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-373499 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-373499  │ 9056ab77afb8e │ 4.95MB │
│ localhost/minikube-local-cache-test     │ functional-373499  │ c136d11c7c6e6 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-373499 image ls --format table --alsologtostderr:
I1029 08:34:10.097461  147894 out.go:360] Setting OutFile to fd 1 ...
I1029 08:34:10.097752  147894 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:34:10.097763  147894 out.go:374] Setting ErrFile to fd 2...
I1029 08:34:10.097769  147894 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:34:10.097989  147894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
I1029 08:34:10.098543  147894 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:34:10.098668  147894 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:34:10.100719  147894 ssh_runner.go:195] Run: systemctl --version
I1029 08:34:10.103481  147894 main.go:143] libmachine: domain functional-373499 has defined MAC address 52:54:00:d0:f1:52 in network mk-functional-373499
I1029 08:34:10.104281  147894 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:f1:52", ip: ""} in network mk-functional-373499: {Iface:virbr1 ExpiryTime:2025-10-29 09:31:07 +0000 UTC Type:0 Mac:52:54:00:d0:f1:52 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:functional-373499 Clientid:01:52:54:00:d0:f1:52}
I1029 08:34:10.104319  147894 main.go:143] libmachine: domain functional-373499 has defined IP address 192.168.39.105 and MAC address 52:54:00:d0:f1:52 in network mk-functional-373499
I1029 08:34:10.104521  147894 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/functional-373499/id_rsa Username:docker}
I1029 08:34:10.185568  147894 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-373499 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64
bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.
io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-se
rver@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-373499"],"size":"4945146"},{"id":"c136d11c7c6e67b3ab9e524dd9f663fd4dd71fc4f79436bb21797ddbc17c2e0a","repoDigests":["localhost/minikube-local-cache-test@sha256:691b57415a3ee2297d5f4752b56ed8ed9f73ed4b14b9c5d51416eb2cd81147b2"],"repoTags":["localhost/minikube-local-cache-test:functional-373499"],"size":"3330"},{"id":"52546a367cc9e0d
924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTag
s":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317
827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-373499 image ls --format json --alsologtostderr:
I1029 08:34:09.904258  147883 out.go:360] Setting OutFile to fd 1 ...
I1029 08:34:09.904497  147883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:34:09.904505  147883 out.go:374] Setting ErrFile to fd 2...
I1029 08:34:09.904510  147883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:34:09.904724  147883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
I1029 08:34:09.905269  147883 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:34:09.905364  147883 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:34:09.907296  147883 ssh_runner.go:195] Run: systemctl --version
I1029 08:34:09.909372  147883 main.go:143] libmachine: domain functional-373499 has defined MAC address 52:54:00:d0:f1:52 in network mk-functional-373499
I1029 08:34:09.909790  147883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:f1:52", ip: ""} in network mk-functional-373499: {Iface:virbr1 ExpiryTime:2025-10-29 09:31:07 +0000 UTC Type:0 Mac:52:54:00:d0:f1:52 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:functional-373499 Clientid:01:52:54:00:d0:f1:52}
I1029 08:34:09.909816  147883 main.go:143] libmachine: domain functional-373499 has defined IP address 192.168.39.105 and MAC address 52:54:00:d0:f1:52 in network mk-functional-373499
I1029 08:34:09.910021  147883 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/functional-373499/id_rsa Username:docker}
I1029 08:34:09.990630  147883 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-373499 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c136d11c7c6e67b3ab9e524dd9f663fd4dd71fc4f79436bb21797ddbc17c2e0a
repoDigests:
- localhost/minikube-local-cache-test@sha256:691b57415a3ee2297d5f4752b56ed8ed9f73ed4b14b9c5d51416eb2cd81147b2
repoTags:
- localhost/minikube-local-cache-test:functional-373499
size: "3330"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-373499
size: "4945146"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-373499 image ls --format yaml --alsologtostderr:
I1029 08:34:10.287357  147905 out.go:360] Setting OutFile to fd 1 ...
I1029 08:34:10.287694  147905 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:34:10.287706  147905 out.go:374] Setting ErrFile to fd 2...
I1029 08:34:10.287710  147905 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:34:10.287907  147905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
I1029 08:34:10.288499  147905 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:34:10.288603  147905 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:34:10.290567  147905 ssh_runner.go:195] Run: systemctl --version
I1029 08:34:10.292590  147905 main.go:143] libmachine: domain functional-373499 has defined MAC address 52:54:00:d0:f1:52 in network mk-functional-373499
I1029 08:34:10.292982  147905 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:f1:52", ip: ""} in network mk-functional-373499: {Iface:virbr1 ExpiryTime:2025-10-29 09:31:07 +0000 UTC Type:0 Mac:52:54:00:d0:f1:52 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:functional-373499 Clientid:01:52:54:00:d0:f1:52}
I1029 08:34:10.293005  147905 main.go:143] libmachine: domain functional-373499 has defined IP address 192.168.39.105 and MAC address 52:54:00:d0:f1:52 in network mk-functional-373499
I1029 08:34:10.293153  147905 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/functional-373499/id_rsa Username:docker}
I1029 08:34:10.374897  147905 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (11.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-373499 ssh pgrep buildkitd: exit status 1 (153.953478ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image build -t localhost/my-image:functional-373499 testdata/build --alsologtostderr
E1029 08:34:13.577781  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-373499 image build -t localhost/my-image:functional-373499 testdata/build --alsologtostderr: (10.739789875s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-373499 image build -t localhost/my-image:functional-373499 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> afe25ccc9a5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-373499
--> 3ff5664b993
Successfully tagged localhost/my-image:functional-373499
3ff5664b993b6c00ae0e9c7d28600e6a29e2975c34215d3a4072b23f4fa5a7d0
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-373499 image build -t localhost/my-image:functional-373499 testdata/build --alsologtostderr:
I1029 08:34:10.630903  147927 out.go:360] Setting OutFile to fd 1 ...
I1029 08:34:10.631335  147927 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:34:10.631346  147927 out.go:374] Setting ErrFile to fd 2...
I1029 08:34:10.631353  147927 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1029 08:34:10.631547  147927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
I1029 08:34:10.632128  147927 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:34:10.632869  147927 config.go:182] Loaded profile config "functional-373499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1029 08:34:10.634987  147927 ssh_runner.go:195] Run: systemctl --version
I1029 08:34:10.636871  147927 main.go:143] libmachine: domain functional-373499 has defined MAC address 52:54:00:d0:f1:52 in network mk-functional-373499
I1029 08:34:10.637252  147927 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d0:f1:52", ip: ""} in network mk-functional-373499: {Iface:virbr1 ExpiryTime:2025-10-29 09:31:07 +0000 UTC Type:0 Mac:52:54:00:d0:f1:52 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:functional-373499 Clientid:01:52:54:00:d0:f1:52}
I1029 08:34:10.637284  147927 main.go:143] libmachine: domain functional-373499 has defined IP address 192.168.39.105 and MAC address 52:54:00:d0:f1:52 in network mk-functional-373499
I1029 08:34:10.637482  147927 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/functional-373499/id_rsa Username:docker}
I1029 08:34:10.716577  147927 build_images.go:162] Building image from path: /tmp/build.2708319269.tar
I1029 08:34:10.716665  147927 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1029 08:34:10.728202  147927 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2708319269.tar
I1029 08:34:10.732657  147927 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2708319269.tar: stat -c "%s %y" /var/lib/minikube/build/build.2708319269.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2708319269.tar': No such file or directory
I1029 08:34:10.732699  147927 ssh_runner.go:362] scp /tmp/build.2708319269.tar --> /var/lib/minikube/build/build.2708319269.tar (3072 bytes)
I1029 08:34:10.762569  147927 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2708319269
I1029 08:34:10.773651  147927 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2708319269 -xf /var/lib/minikube/build/build.2708319269.tar
I1029 08:34:10.783997  147927 crio.go:315] Building image: /var/lib/minikube/build/build.2708319269
I1029 08:34:10.784070  147927 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-373499 /var/lib/minikube/build/build.2708319269 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1029 08:34:21.274804  147927 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-373499 /var/lib/minikube/build/build.2708319269 --cgroup-manager=cgroupfs: (10.490696761s)
I1029 08:34:21.274881  147927 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2708319269
I1029 08:34:21.294750  147927 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2708319269.tar
I1029 08:34:21.307900  147927 build_images.go:218] Built localhost/my-image:functional-373499 from /tmp/build.2708319269.tar
I1029 08:34:21.307978  147927 build_images.go:134] succeeded building to: functional-373499
I1029 08:34:21.307992  147927 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (11.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.907609704s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-373499
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-373499 /tmp/TestFunctionalparallelMountCmdany-port2651630074/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761726832079816549" to /tmp/TestFunctionalparallelMountCmdany-port2651630074/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761726832079816549" to /tmp/TestFunctionalparallelMountCmdany-port2651630074/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761726832079816549" to /tmp/TestFunctionalparallelMountCmdany-port2651630074/001/test-1761726832079816549
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-373499 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (167.990833ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1029 08:33:52.248124  141231 retry.go:31] will retry after 371.340416ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 29 08:33 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 29 08:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 29 08:33 test-1761726832079816549
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh cat /mount-9p/test-1761726832079816549
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-373499 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [b6864584-4aa0-4527-88d2-6ad5bc680052] Pending
helpers_test.go:352: "busybox-mount" [b6864584-4aa0-4527-88d2-6ad5bc680052] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [b6864584-4aa0-4527-88d2-6ad5bc680052] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [b6864584-4aa0-4527-88d2-6ad5bc680052] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.003078414s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-373499 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-373499 /tmp/TestFunctionalparallelMountCmdany-port2651630074/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.95s)
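The "will retry after ..." line above comes from the harness polling the guest until the 9p mount becomes visible. A minimal sketch of that poll-and-backoff pattern, assuming nothing about the real retry.go helper and using a hypothetical waitForMount function, might look like this:

```go
// Minimal sketch: poll `findmnt -T /mount-9p` inside the guest until the 9p
// mount appears or a deadline passes. waitForMount is a hypothetical helper
// for illustration, not part of minikube's test code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForMount(profile, path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", path))
		if err := cmd.Run(); err == nil {
			return nil // the mount is visible inside the guest
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mount %s not visible within %s", path, timeout)
		}
		time.Sleep(delay)
		delay *= 2 // simple doubling backoff; the delays in the log above vary per retry
	}
}

func main() {
	if err := waitForMount("functional-373499", "/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```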

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "252.060664ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.925888ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "241.424925ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "69.013328ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image load --daemon kicbase/echo-server:functional-373499 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-373499 image load --daemon kicbase/echo-server:functional-373499 --alsologtostderr: (1.301577393s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image load --daemon kicbase/echo-server:functional-373499 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-373499
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image load --daemon kicbase/echo-server:functional-373499 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image save kicbase/echo-server:functional-373499 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image rm kicbase/echo-server:functional-373499 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-373499
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 image save --daemon kicbase/echo-server:functional-373499 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-373499
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-373499 /tmp/TestFunctionalparallelMountCmdspecific-port3051038221/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-373499 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (198.744271ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1029 08:34:01.231736  141231 retry.go:31] will retry after 460.733066ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-373499 /tmp/TestFunctionalparallelMountCmdspecific-port3051038221/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-373499 ssh "sudo umount -f /mount-9p": exit status 1 (165.71389ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-373499 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-373499 /tmp/TestFunctionalparallelMountCmdspecific-port3051038221/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 service list -o json
functional_test.go:1504: Took "422.095965ms" to run "out/minikube-linux-amd64 -p functional-373499 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.105:32383
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-373499 /tmp/TestFunctionalparallelMountCmdVerifyCleanup506578685/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-373499 /tmp/TestFunctionalparallelMountCmdVerifyCleanup506578685/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-373499 /tmp/TestFunctionalparallelMountCmdVerifyCleanup506578685/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-373499 ssh "findmnt -T" /mount1: exit status 1 (180.553595ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1029 08:34:02.576182  141231 retry.go:31] will retry after 362.544478ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-373499 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-373499 /tmp/TestFunctionalparallelMountCmdVerifyCleanup506578685/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-373499 /tmp/TestFunctionalparallelMountCmdVerifyCleanup506578685/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-373499 /tmp/TestFunctionalparallelMountCmdVerifyCleanup506578685/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.13s)

TestFunctional/parallel/ServiceCmd/URL (0.24s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-373499 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.105:32383
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.24s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-373499
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-373499
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-373499
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (236.38s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-523597 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m55.844554876s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (236.38s)

TestMultiControlPlane/serial/DeployApp (7.62s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-523597 kubectl -- rollout status deployment/busybox: (5.333152252s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-dt98z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-w8t26 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-z6hr6 -- nslookup kubernetes.io
E1029 08:38:45.865139  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-dt98z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-w8t26 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-z6hr6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-dt98z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-w8t26 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-z6hr6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.62s)

TestMultiControlPlane/serial/PingHostFromPods (1.26s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-dt98z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-dt98z -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-w8t26 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-w8t26 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-z6hr6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 kubectl -- exec busybox-7b57f96db7-z6hr6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)

TestMultiControlPlane/serial/AddWorkerNode (46.46s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 node add --alsologtostderr -v 5
E1029 08:38:51.151995  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:38:51.158443  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:38:51.169838  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:38:51.191183  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:38:51.232569  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:38:51.314042  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:38:51.475643  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:38:51.797961  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:38:52.439479  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:38:53.721458  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:38:56.283501  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:39:01.405172  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:39:11.646954  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:39:32.128574  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-523597 node add --alsologtostderr -v 5: (45.804659428s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.46s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-523597 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

TestMultiControlPlane/serial/CopyFile (10.61s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp testdata/cp-test.txt ha-523597:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3804690610/001/cp-test_ha-523597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597:/home/docker/cp-test.txt ha-523597-m02:/home/docker/cp-test_ha-523597_ha-523597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m02 "sudo cat /home/docker/cp-test_ha-523597_ha-523597-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597:/home/docker/cp-test.txt ha-523597-m03:/home/docker/cp-test_ha-523597_ha-523597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m03 "sudo cat /home/docker/cp-test_ha-523597_ha-523597-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597:/home/docker/cp-test.txt ha-523597-m04:/home/docker/cp-test_ha-523597_ha-523597-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m04 "sudo cat /home/docker/cp-test_ha-523597_ha-523597-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp testdata/cp-test.txt ha-523597-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3804690610/001/cp-test_ha-523597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597-m02:/home/docker/cp-test.txt ha-523597:/home/docker/cp-test_ha-523597-m02_ha-523597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597 "sudo cat /home/docker/cp-test_ha-523597-m02_ha-523597.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597-m02:/home/docker/cp-test.txt ha-523597-m03:/home/docker/cp-test_ha-523597-m02_ha-523597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m03 "sudo cat /home/docker/cp-test_ha-523597-m02_ha-523597-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597-m02:/home/docker/cp-test.txt ha-523597-m04:/home/docker/cp-test_ha-523597-m02_ha-523597-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m04 "sudo cat /home/docker/cp-test_ha-523597-m02_ha-523597-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp testdata/cp-test.txt ha-523597-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3804690610/001/cp-test_ha-523597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597-m03:/home/docker/cp-test.txt ha-523597:/home/docker/cp-test_ha-523597-m03_ha-523597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597 "sudo cat /home/docker/cp-test_ha-523597-m03_ha-523597.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597-m03:/home/docker/cp-test.txt ha-523597-m02:/home/docker/cp-test_ha-523597-m03_ha-523597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m02 "sudo cat /home/docker/cp-test_ha-523597-m03_ha-523597-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597-m03:/home/docker/cp-test.txt ha-523597-m04:/home/docker/cp-test_ha-523597-m03_ha-523597-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m04 "sudo cat /home/docker/cp-test_ha-523597-m03_ha-523597-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp testdata/cp-test.txt ha-523597-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3804690610/001/cp-test_ha-523597-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597-m04:/home/docker/cp-test.txt ha-523597:/home/docker/cp-test_ha-523597-m04_ha-523597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597 "sudo cat /home/docker/cp-test_ha-523597-m04_ha-523597.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597-m04:/home/docker/cp-test.txt ha-523597-m02:/home/docker/cp-test_ha-523597-m04_ha-523597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m02 "sudo cat /home/docker/cp-test_ha-523597-m04_ha-523597-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 cp ha-523597-m04:/home/docker/cp-test.txt ha-523597-m03:/home/docker/cp-test_ha-523597-m04_ha-523597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 ssh -n ha-523597-m03 "sudo cat /home/docker/cp-test_ha-523597-m04_ha-523597-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.61s)

TestMultiControlPlane/serial/StopSecondaryNode (69.9s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 node stop m02 --alsologtostderr -v 5
E1029 08:40:13.090649  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-523597 node stop m02 --alsologtostderr -v 5: (1m9.40121907s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-523597 status --alsologtostderr -v 5: exit status 7 (497.086799ms)

                                                
                                                
-- stdout --
	ha-523597
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-523597-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-523597-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-523597-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:40:55.644144  151161 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:40:55.644505  151161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:40:55.644515  151161 out.go:374] Setting ErrFile to fd 2...
	I1029 08:40:55.644520  151161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:40:55.644726  151161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
	I1029 08:40:55.644909  151161 out.go:368] Setting JSON to false
	I1029 08:40:55.644939  151161 mustload.go:66] Loading cluster: ha-523597
	I1029 08:40:55.645034  151161 notify.go:221] Checking for updates...
	I1029 08:40:55.645473  151161 config.go:182] Loaded profile config "ha-523597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:40:55.645492  151161 status.go:174] checking status of ha-523597 ...
	I1029 08:40:55.647917  151161 status.go:371] ha-523597 host status = "Running" (err=<nil>)
	I1029 08:40:55.647940  151161 host.go:66] Checking if "ha-523597" exists ...
	I1029 08:40:55.651044  151161 main.go:143] libmachine: domain ha-523597 has defined MAC address 52:54:00:7a:98:c6 in network mk-ha-523597
	I1029 08:40:55.651612  151161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:c6", ip: ""} in network mk-ha-523597: {Iface:virbr1 ExpiryTime:2025-10-29 09:34:57 +0000 UTC Type:0 Mac:52:54:00:7a:98:c6 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-523597 Clientid:01:52:54:00:7a:98:c6}
	I1029 08:40:55.651641  151161 main.go:143] libmachine: domain ha-523597 has defined IP address 192.168.39.120 and MAC address 52:54:00:7a:98:c6 in network mk-ha-523597
	I1029 08:40:55.651857  151161 host.go:66] Checking if "ha-523597" exists ...
	I1029 08:40:55.652153  151161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:40:55.654930  151161 main.go:143] libmachine: domain ha-523597 has defined MAC address 52:54:00:7a:98:c6 in network mk-ha-523597
	I1029 08:40:55.655449  151161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:c6", ip: ""} in network mk-ha-523597: {Iface:virbr1 ExpiryTime:2025-10-29 09:34:57 +0000 UTC Type:0 Mac:52:54:00:7a:98:c6 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-523597 Clientid:01:52:54:00:7a:98:c6}
	I1029 08:40:55.655486  151161 main.go:143] libmachine: domain ha-523597 has defined IP address 192.168.39.120 and MAC address 52:54:00:7a:98:c6 in network mk-ha-523597
	I1029 08:40:55.655684  151161 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/ha-523597/id_rsa Username:docker}
	I1029 08:40:55.746198  151161 ssh_runner.go:195] Run: systemctl --version
	I1029 08:40:55.753524  151161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:40:55.770965  151161 kubeconfig.go:125] found "ha-523597" server: "https://192.168.39.254:8443"
	I1029 08:40:55.771006  151161 api_server.go:166] Checking apiserver status ...
	I1029 08:40:55.771051  151161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:40:55.790034  151161 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1398/cgroup
	W1029 08:40:55.800704  151161 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1398/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:40:55.800776  151161 ssh_runner.go:195] Run: ls
	I1029 08:40:55.806065  151161 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1029 08:40:55.810894  151161 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1029 08:40:55.810915  151161 status.go:463] ha-523597 apiserver status = Running (err=<nil>)
	I1029 08:40:55.810927  151161 status.go:176] ha-523597 status: &{Name:ha-523597 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:40:55.810948  151161 status.go:174] checking status of ha-523597-m02 ...
	I1029 08:40:55.812534  151161 status.go:371] ha-523597-m02 host status = "Stopped" (err=<nil>)
	I1029 08:40:55.812551  151161 status.go:384] host is not running, skipping remaining checks
	I1029 08:40:55.812556  151161 status.go:176] ha-523597-m02 status: &{Name:ha-523597-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:40:55.812573  151161 status.go:174] checking status of ha-523597-m03 ...
	I1029 08:40:55.814111  151161 status.go:371] ha-523597-m03 host status = "Running" (err=<nil>)
	I1029 08:40:55.814127  151161 host.go:66] Checking if "ha-523597-m03" exists ...
	I1029 08:40:55.816349  151161 main.go:143] libmachine: domain ha-523597-m03 has defined MAC address 52:54:00:cc:21:d9 in network mk-ha-523597
	I1029 08:40:55.816728  151161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cc:21:d9", ip: ""} in network mk-ha-523597: {Iface:virbr1 ExpiryTime:2025-10-29 09:37:25 +0000 UTC Type:0 Mac:52:54:00:cc:21:d9 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-523597-m03 Clientid:01:52:54:00:cc:21:d9}
	I1029 08:40:55.816749  151161 main.go:143] libmachine: domain ha-523597-m03 has defined IP address 192.168.39.188 and MAC address 52:54:00:cc:21:d9 in network mk-ha-523597
	I1029 08:40:55.816891  151161 host.go:66] Checking if "ha-523597-m03" exists ...
	I1029 08:40:55.817073  151161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:40:55.819144  151161 main.go:143] libmachine: domain ha-523597-m03 has defined MAC address 52:54:00:cc:21:d9 in network mk-ha-523597
	I1029 08:40:55.819497  151161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cc:21:d9", ip: ""} in network mk-ha-523597: {Iface:virbr1 ExpiryTime:2025-10-29 09:37:25 +0000 UTC Type:0 Mac:52:54:00:cc:21:d9 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-523597-m03 Clientid:01:52:54:00:cc:21:d9}
	I1029 08:40:55.819524  151161 main.go:143] libmachine: domain ha-523597-m03 has defined IP address 192.168.39.188 and MAC address 52:54:00:cc:21:d9 in network mk-ha-523597
	I1029 08:40:55.819658  151161 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/ha-523597-m03/id_rsa Username:docker}
	I1029 08:40:55.908319  151161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:40:55.925283  151161 kubeconfig.go:125] found "ha-523597" server: "https://192.168.39.254:8443"
	I1029 08:40:55.925313  151161 api_server.go:166] Checking apiserver status ...
	I1029 08:40:55.925344  151161 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 08:40:55.944451  151161 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1759/cgroup
	W1029 08:40:55.955462  151161 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1759/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1029 08:40:55.955539  151161 ssh_runner.go:195] Run: ls
	I1029 08:40:55.960223  151161 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1029 08:40:55.965174  151161 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1029 08:40:55.965196  151161 status.go:463] ha-523597-m03 apiserver status = Running (err=<nil>)
	I1029 08:40:55.965206  151161 status.go:176] ha-523597-m03 status: &{Name:ha-523597-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:40:55.965221  151161 status.go:174] checking status of ha-523597-m04 ...
	I1029 08:40:55.967171  151161 status.go:371] ha-523597-m04 host status = "Running" (err=<nil>)
	I1029 08:40:55.967196  151161 host.go:66] Checking if "ha-523597-m04" exists ...
	I1029 08:40:55.969798  151161 main.go:143] libmachine: domain ha-523597-m04 has defined MAC address 52:54:00:70:eb:b0 in network mk-ha-523597
	I1029 08:40:55.970254  151161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:eb:b0", ip: ""} in network mk-ha-523597: {Iface:virbr1 ExpiryTime:2025-10-29 09:39:04 +0000 UTC Type:0 Mac:52:54:00:70:eb:b0 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-523597-m04 Clientid:01:52:54:00:70:eb:b0}
	I1029 08:40:55.970275  151161 main.go:143] libmachine: domain ha-523597-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:70:eb:b0 in network mk-ha-523597
	I1029 08:40:55.970431  151161 host.go:66] Checking if "ha-523597-m04" exists ...
	I1029 08:40:55.970648  151161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 08:40:55.972906  151161 main.go:143] libmachine: domain ha-523597-m04 has defined MAC address 52:54:00:70:eb:b0 in network mk-ha-523597
	I1029 08:40:55.973337  151161 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:eb:b0", ip: ""} in network mk-ha-523597: {Iface:virbr1 ExpiryTime:2025-10-29 09:39:04 +0000 UTC Type:0 Mac:52:54:00:70:eb:b0 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-523597-m04 Clientid:01:52:54:00:70:eb:b0}
	I1029 08:40:55.973366  151161 main.go:143] libmachine: domain ha-523597-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:70:eb:b0 in network mk-ha-523597
	I1029 08:40:55.973530  151161 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/ha-523597-m04/id_rsa Username:docker}
	I1029 08:40:56.059174  151161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 08:40:56.076359  151161 status.go:176] ha-523597-m04 status: &{Name:ha-523597-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (69.90s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

TestMultiControlPlane/serial/RestartSecondaryNode (34.21s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-523597 node start m02 --alsologtostderr -v 5: (33.454887104s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.21s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (358.37s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 stop --alsologtostderr -v 5
E1029 08:41:35.012327  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:45.866357  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:43:51.156252  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:44:18.854128  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:45:08.941281  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-523597 stop --alsologtostderr -v 5: (4m6.672910348s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-523597 start --wait true --alsologtostderr -v 5: (1m51.552491498s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (358.37s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.41s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-523597 node delete m03 --alsologtostderr -v 5: (17.777309761s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.41s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

TestMultiControlPlane/serial/StopCluster (250.53s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 stop --alsologtostderr -v 5
E1029 08:48:45.864990  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:48:51.155779  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-523597 stop --alsologtostderr -v 5: (4m10.461314809s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-523597 status --alsologtostderr -v 5: exit status 7 (63.973582ms)

                                                
                                                
-- stdout --
	ha-523597
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-523597-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-523597-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 08:51:59.414972  154324 out.go:360] Setting OutFile to fd 1 ...
	I1029 08:51:59.415251  154324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:51:59.415261  154324 out.go:374] Setting ErrFile to fd 2...
	I1029 08:51:59.415266  154324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 08:51:59.415489  154324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
	I1029 08:51:59.415695  154324 out.go:368] Setting JSON to false
	I1029 08:51:59.415725  154324 mustload.go:66] Loading cluster: ha-523597
	I1029 08:51:59.415831  154324 notify.go:221] Checking for updates...
	I1029 08:51:59.416201  154324 config.go:182] Loaded profile config "ha-523597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 08:51:59.416220  154324 status.go:174] checking status of ha-523597 ...
	I1029 08:51:59.418286  154324 status.go:371] ha-523597 host status = "Stopped" (err=<nil>)
	I1029 08:51:59.418304  154324 status.go:384] host is not running, skipping remaining checks
	I1029 08:51:59.418311  154324 status.go:176] ha-523597 status: &{Name:ha-523597 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:51:59.418329  154324 status.go:174] checking status of ha-523597-m02 ...
	I1029 08:51:59.419455  154324 status.go:371] ha-523597-m02 host status = "Stopped" (err=<nil>)
	I1029 08:51:59.419473  154324 status.go:384] host is not running, skipping remaining checks
	I1029 08:51:59.419492  154324 status.go:176] ha-523597-m02 status: &{Name:ha-523597-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 08:51:59.419516  154324 status.go:174] checking status of ha-523597-m04 ...
	I1029 08:51:59.420528  154324 status.go:371] ha-523597-m04 host status = "Stopped" (err=<nil>)
	I1029 08:51:59.420541  154324 status.go:384] host is not running, skipping remaining checks
	I1029 08:51:59.420545  154324 status.go:176] ha-523597-m04 status: &{Name:ha-523597-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (250.53s)

TestMultiControlPlane/serial/RestartCluster (87.15s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-523597 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m26.533689539s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (87.15s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.49s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.49s)

TestMultiControlPlane/serial/AddSecondaryNode (72.69s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 node add --control-plane --alsologtostderr -v 5
E1029 08:53:45.865377  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:53:51.151665  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-523597 node add --control-plane --alsologtostderr -v 5: (1m12.073018585s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-523597 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.69s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)

TestJSONOutput/start/Command (77.09s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-504050 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1029 08:55:14.218023  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-504050 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m17.086820095s)
--- PASS: TestJSONOutput/start/Command (77.09s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-504050 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-504050 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.79s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-504050 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-504050 --output=json --user=testUser: (6.78985123s)
--- PASS: TestJSONOutput/stop/Command (6.79s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-378903 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-378903 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (71.886788ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"32d8b805-bb1f-40ba-acbe-aef093289383","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-378903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf0feb3f-6eb4-4dd3-9346-0bf0e8e586f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21800"}}
	{"specversion":"1.0","id":"5d3c28b9-cb31-4c0a-89c5-f71574d49e92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c5b87cfc-698a-4e55-a455-65b2a4ba6f30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig"}}
	{"specversion":"1.0","id":"43605357-278b-4d4b-9cbe-1120558003ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube"}}
	{"specversion":"1.0","id":"4cb7c300-8665-44b5-b91d-0de76897497f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2206ab46-040e-4971-9798-694a8c771153","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8e7dab9b-e152-4370-930f-9a5b64d23aa4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-378903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-378903
--- PASS: TestErrorJSONOutput (0.22s)
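
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line (the types io.k8s.sigs.minikube.step, .info and .error are visible in the output above). Below is a minimal Go sketch that decodes such a stream and prints each event's message; the struct mirrors only the fields shown in this run, not the full schema.

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // minikubeEvent mirrors only the fields visible in the run above.
    type minikubeEvent struct {
        Type string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
        Data map[string]string `json:"data"` // message, currentstep, exitcode, ...
    }

    func main() {
        // Usage (assumption): minikube start --output=json | this-program
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev minikubeEvent
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip anything that is not a JSON event line
            }
            fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
        }
    }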

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (73.21s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-962663 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-962663 --driver=kvm2  --container-runtime=crio: (35.318676987s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-965191 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-965191 --driver=kvm2  --container-runtime=crio: (35.277934377s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-962663
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-965191
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-965191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-965191
helpers_test.go:175: Cleaning up "first-962663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-962663
--- PASS: TestMinikubeProfile (73.21s)
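
Note: the test flips the active profile with "minikube profile <name>" and then reads "minikube profile list -ojson". The following hedged Go sketch shells out to the same command and prints each profile; the valid/invalid envelope and the Name/Status fields are assumptions based on current minikube releases, since this log does not show the JSON itself.

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // Field names are assumptions based on current minikube releases.
    type profile struct {
        Name   string `json:"Name"`
        Status string `json:"Status"`
    }

    type profileList struct {
        Valid   []profile `json:"valid"`
        Invalid []profile `json:"invalid"`
    }

    func main() {
        out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            log.Fatal(err)
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s\t%s\n", p.Name, p.Status)
        }
    }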

                                                
                                    
TestMountStart/serial/StartWithMountFirst (20.87s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-196839 --memory=3072 --mount-string /tmp/TestMountStartserial2494240596/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-196839 --memory=3072 --mount-string /tmp/TestMountStartserial2494240596/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.866805874s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.87s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-196839 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-196839 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)
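
Note: the mount started above is verified by running findmnt inside the guest and asking for JSON output. A small Go sketch of the same check from the host follows; the filesystems/target/source/fstype keys follow util-linux findmnt --json output, and the profile name is taken from this run.

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // Shape of `findmnt --json` output (util-linux); only the fields read here.
    type findmntOutput struct {
        Filesystems []struct {
            Target string `json:"target"`
            Source string `json:"source"`
            Fstype string `json:"fstype"`
        } `json:"filesystems"`
    }

    func main() {
        out, err := exec.Command("minikube", "-p", "mount-start-1-196839",
            "ssh", "--", "findmnt", "--json", "/minikube-host").Output()
        if err != nil {
            log.Fatal(err)
        }
        var fm findmntOutput
        if err := json.Unmarshal(out, &fm); err != nil {
            log.Fatal(err)
        }
        for _, fs := range fm.Filesystems {
            fmt.Printf("%s mounted at %s (%s)\n", fs.Source, fs.Target, fs.Fstype)
        }
    }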

                                                
                                    
TestMountStart/serial/StartWithMountSecond (20.65s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-222535 --memory=3072 --mount-string /tmp/TestMountStartserial2494240596/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-222535 --memory=3072 --mount-string /tmp/TestMountStartserial2494240596/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.647513171s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.65s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-222535 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-222535 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-196839 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-222535 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-222535 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-222535
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-222535: (1.208026862s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (18.42s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-222535
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-222535: (17.42109458s)
--- PASS: TestMountStart/serial/RestartStopped (18.42s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-222535 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-222535 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (94.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-181321 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1029 08:58:45.865596  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 08:58:51.152478  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-181321 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m34.173169675s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (94.49s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-181321 -- rollout status deployment/busybox: (4.703099078s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- exec busybox-7b57f96db7-crhr4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- exec busybox-7b57f96db7-k8j6v -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- exec busybox-7b57f96db7-crhr4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- exec busybox-7b57f96db7-k8j6v -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- exec busybox-7b57f96db7-crhr4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- exec busybox-7b57f96db7-k8j6v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.33s)
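
Note: after the rollout, the test resolves cluster DNS names from a pod scheduled on each node. Below is an illustrative Go sketch of that check using kubectl via os/exec; the app=busybox label selector is an assumption (the log only shows the generated pod names), so adjust it to match the manifest actually applied.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // The app=busybox selector is an assumption about the applied manifest.
        out, err := exec.Command("kubectl", "--context", "multinode-181321",
            "get", "pods", "-l", "app=busybox",
            "-o", "jsonpath={.items[*].metadata.name}").Output()
        if err != nil {
            log.Fatal(err)
        }
        for _, pod := range strings.Fields(string(out)) {
            // Resolve an in-cluster name from inside the pod, as the test does.
            res, err := exec.Command("kubectl", "--context", "multinode-181321",
                "exec", pod, "--", "nslookup", "kubernetes.default").CombinedOutput()
            if err != nil {
                log.Fatalf("%s: %v\n%s", pod, err, res)
            }
            fmt.Printf("%s resolved kubernetes.default\n", pod)
        }
    }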

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- exec busybox-7b57f96db7-crhr4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- exec busybox-7b57f96db7-crhr4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- exec busybox-7b57f96db7-k8j6v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181321 -- exec busybox-7b57f96db7-k8j6v -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                    
TestMultiNode/serial/AddNode (45.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-181321 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-181321 -v=5 --alsologtostderr: (44.954734649s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.39s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-181321 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.45s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 cp testdata/cp-test.txt multinode-181321:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 cp multinode-181321:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile45025131/001/cp-test_multinode-181321.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 cp multinode-181321:/home/docker/cp-test.txt multinode-181321-m02:/home/docker/cp-test_multinode-181321_multinode-181321-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321-m02 "sudo cat /home/docker/cp-test_multinode-181321_multinode-181321-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 cp multinode-181321:/home/docker/cp-test.txt multinode-181321-m03:/home/docker/cp-test_multinode-181321_multinode-181321-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321-m03 "sudo cat /home/docker/cp-test_multinode-181321_multinode-181321-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 cp testdata/cp-test.txt multinode-181321-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 cp multinode-181321-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile45025131/001/cp-test_multinode-181321-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 cp multinode-181321-m02:/home/docker/cp-test.txt multinode-181321:/home/docker/cp-test_multinode-181321-m02_multinode-181321.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321 "sudo cat /home/docker/cp-test_multinode-181321-m02_multinode-181321.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 cp multinode-181321-m02:/home/docker/cp-test.txt multinode-181321-m03:/home/docker/cp-test_multinode-181321-m02_multinode-181321-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321-m03 "sudo cat /home/docker/cp-test_multinode-181321-m02_multinode-181321-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 cp testdata/cp-test.txt multinode-181321-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 cp multinode-181321-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile45025131/001/cp-test_multinode-181321-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 cp multinode-181321-m03:/home/docker/cp-test.txt multinode-181321:/home/docker/cp-test_multinode-181321-m03_multinode-181321.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321 "sudo cat /home/docker/cp-test_multinode-181321-m03_multinode-181321.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 cp multinode-181321-m03:/home/docker/cp-test.txt multinode-181321-m02:/home/docker/cp-test_multinode-181321-m03_multinode-181321-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 ssh -n multinode-181321-m02 "sudo cat /home/docker/cp-test_multinode-181321-m03_multinode-181321-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.96s)
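
Note: the sequence above exercises "minikube cp" in three directions (host to node, node to host, node to node), verifying each copy with ssh -n <node> and sudo cat. A compact Go sketch of one such round trip using the same commands follows; profile and node names are copied from this run.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // run invokes the minikube binary and fails loudly on any error.
    func run(args ...string) string {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("minikube %v: %v\n%s", args, err, out)
        }
        return string(out)
    }

    func main() {
        p := "multinode-181321"
        // host -> control-plane node
        run("-p", p, "cp", "testdata/cp-test.txt", p+":/home/docker/cp-test.txt")
        // node -> node
        run("-p", p, "cp", p+":/home/docker/cp-test.txt", p+"-m02:/home/docker/cp-test.txt")
        // verify on the destination node
        fmt.Print(run("-p", p, "ssh", "-n", p+"-m02", "sudo cat /home/docker/cp-test.txt"))
    }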

                                                
                                    
TestMultiNode/serial/StopNode (2.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-181321 node stop m03: (1.745725372s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-181321 status: exit status 7 (322.106866ms)

                                                
                                                
-- stdout --
	multinode-181321
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-181321-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-181321-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-181321 status --alsologtostderr: exit status 7 (323.694742ms)

                                                
                                                
-- stdout --
	multinode-181321
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-181321-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-181321-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:01:02.362815  159710 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:01:02.363128  159710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:01:02.363139  159710 out.go:374] Setting ErrFile to fd 2...
	I1029 09:01:02.363144  159710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:01:02.363336  159710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
	I1029 09:01:02.363531  159710 out.go:368] Setting JSON to false
	I1029 09:01:02.363557  159710 mustload.go:66] Loading cluster: multinode-181321
	I1029 09:01:02.363608  159710 notify.go:221] Checking for updates...
	I1029 09:01:02.364598  159710 config.go:182] Loaded profile config "multinode-181321": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:01:02.364637  159710 status.go:174] checking status of multinode-181321 ...
	I1029 09:01:02.367239  159710 status.go:371] multinode-181321 host status = "Running" (err=<nil>)
	I1029 09:01:02.367264  159710 host.go:66] Checking if "multinode-181321" exists ...
	I1029 09:01:02.369571  159710 main.go:143] libmachine: domain multinode-181321 has defined MAC address 52:54:00:d5:d9:21 in network mk-multinode-181321
	I1029 09:01:02.370046  159710 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d5:d9:21", ip: ""} in network mk-multinode-181321: {Iface:virbr1 ExpiryTime:2025-10-29 09:58:41 +0000 UTC Type:0 Mac:52:54:00:d5:d9:21 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:multinode-181321 Clientid:01:52:54:00:d5:d9:21}
	I1029 09:01:02.370090  159710 main.go:143] libmachine: domain multinode-181321 has defined IP address 192.168.39.113 and MAC address 52:54:00:d5:d9:21 in network mk-multinode-181321
	I1029 09:01:02.370217  159710 host.go:66] Checking if "multinode-181321" exists ...
	I1029 09:01:02.370458  159710 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:01:02.372354  159710 main.go:143] libmachine: domain multinode-181321 has defined MAC address 52:54:00:d5:d9:21 in network mk-multinode-181321
	I1029 09:01:02.372715  159710 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d5:d9:21", ip: ""} in network mk-multinode-181321: {Iface:virbr1 ExpiryTime:2025-10-29 09:58:41 +0000 UTC Type:0 Mac:52:54:00:d5:d9:21 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:multinode-181321 Clientid:01:52:54:00:d5:d9:21}
	I1029 09:01:02.372743  159710 main.go:143] libmachine: domain multinode-181321 has defined IP address 192.168.39.113 and MAC address 52:54:00:d5:d9:21 in network mk-multinode-181321
	I1029 09:01:02.372876  159710 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/multinode-181321/id_rsa Username:docker}
	I1029 09:01:02.453739  159710 ssh_runner.go:195] Run: systemctl --version
	I1029 09:01:02.460142  159710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:01:02.481781  159710 kubeconfig.go:125] found "multinode-181321" server: "https://192.168.39.113:8443"
	I1029 09:01:02.481821  159710 api_server.go:166] Checking apiserver status ...
	I1029 09:01:02.481861  159710 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1029 09:01:02.500661  159710 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1350/cgroup
	W1029 09:01:02.512489  159710 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1350/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1029 09:01:02.512557  159710 ssh_runner.go:195] Run: ls
	I1029 09:01:02.517484  159710 api_server.go:253] Checking apiserver healthz at https://192.168.39.113:8443/healthz ...
	I1029 09:01:02.522133  159710 api_server.go:279] https://192.168.39.113:8443/healthz returned 200:
	ok
	I1029 09:01:02.522157  159710 status.go:463] multinode-181321 apiserver status = Running (err=<nil>)
	I1029 09:01:02.522168  159710 status.go:176] multinode-181321 status: &{Name:multinode-181321 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 09:01:02.522185  159710 status.go:174] checking status of multinode-181321-m02 ...
	I1029 09:01:02.523704  159710 status.go:371] multinode-181321-m02 host status = "Running" (err=<nil>)
	I1029 09:01:02.523721  159710 host.go:66] Checking if "multinode-181321-m02" exists ...
	I1029 09:01:02.526026  159710 main.go:143] libmachine: domain multinode-181321-m02 has defined MAC address 52:54:00:30:e1:ea in network mk-multinode-181321
	I1029 09:01:02.526383  159710 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:e1:ea", ip: ""} in network mk-multinode-181321: {Iface:virbr1 ExpiryTime:2025-10-29 09:59:34 +0000 UTC Type:0 Mac:52:54:00:30:e1:ea Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:multinode-181321-m02 Clientid:01:52:54:00:30:e1:ea}
	I1029 09:01:02.526417  159710 main.go:143] libmachine: domain multinode-181321-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:30:e1:ea in network mk-multinode-181321
	I1029 09:01:02.526562  159710 host.go:66] Checking if "multinode-181321-m02" exists ...
	I1029 09:01:02.526750  159710 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1029 09:01:02.528789  159710 main.go:143] libmachine: domain multinode-181321-m02 has defined MAC address 52:54:00:30:e1:ea in network mk-multinode-181321
	I1029 09:01:02.529160  159710 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:30:e1:ea", ip: ""} in network mk-multinode-181321: {Iface:virbr1 ExpiryTime:2025-10-29 09:59:34 +0000 UTC Type:0 Mac:52:54:00:30:e1:ea Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:multinode-181321-m02 Clientid:01:52:54:00:30:e1:ea}
	I1029 09:01:02.529179  159710 main.go:143] libmachine: domain multinode-181321-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:30:e1:ea in network mk-multinode-181321
	I1029 09:01:02.529335  159710 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21800-137232/.minikube/machines/multinode-181321-m02/id_rsa Username:docker}
	I1029 09:01:02.606342  159710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1029 09:01:02.622362  159710 status.go:176] multinode-181321-m02 status: &{Name:multinode-181321-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1029 09:01:02.622399  159710 status.go:174] checking status of multinode-181321-m03 ...
	I1029 09:01:02.624260  159710 status.go:371] multinode-181321-m03 host status = "Stopped" (err=<nil>)
	I1029 09:01:02.624281  159710 status.go:384] host is not running, skipping remaining checks
	I1029 09:01:02.624303  159710 status.go:176] multinode-181321-m03 status: &{Name:multinode-181321-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
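
Note: "minikube status" signals degraded state through its exit code (exit status 7 above while m03 is stopped) but still prints per-node status on stdout, so a caller should read the output before treating the error as fatal. The hedged Go sketch below uses status --output json; the field names mirror the status struct dumped in the stderr trace above, while the array form for multi-node JSON output is an assumption.

    package main

    import (
        "encoding/json"
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    // Field names follow the status struct in the stderr trace above; treating
    // the multi-node JSON output as an array is an assumption.
    type nodeStatus struct {
        Name      string
        Host      string
        Kubelet   string
        APIServer string
    }

    func main() {
        cmd := exec.Command("minikube", "-p", "multinode-181321", "status", "--output", "json")
        out, err := cmd.Output()
        var exitErr *exec.ExitError
        if err != nil && !errors.As(err, &exitErr) {
            log.Fatal(err) // only fatal if the command could not run at all
        }
        var nodes []nodeStatus
        if jsonErr := json.Unmarshal(out, &nodes); jsonErr != nil {
            log.Fatalf("unexpected status output: %v\n%s", jsonErr, out)
        }
        for _, n := range nodes {
            fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", n.Name, n.Host, n.Kubelet, n.APIServer)
        }
    }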

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-181321 node start m03 -v=5 --alsologtostderr: (39.151091112s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.63s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (284.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-181321
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-181321
E1029 09:01:48.944610  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:03:45.868979  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:03:51.155652  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-181321: (2m41.71268966s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-181321 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-181321 --wait=true -v=5 --alsologtostderr: (2m2.827244477s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-181321
--- PASS: TestMultiNode/serial/RestartKeepsNodes (284.66s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-181321 node delete m03: (2.145545786s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.59s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (154s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 stop
E1029 09:08:45.864987  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:08:51.151168  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-181321 stop: (2m33.869756257s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-181321 status: exit status 7 (65.638479ms)

                                                
                                                
-- stdout --
	multinode-181321
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-181321-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-181321 status --alsologtostderr: exit status 7 (65.62749ms)

                                                
                                                
-- stdout --
	multinode-181321
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-181321-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:09:03.508992  162066 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:09:03.509327  162066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:09:03.509339  162066 out.go:374] Setting ErrFile to fd 2...
	I1029 09:09:03.509343  162066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:09:03.509570  162066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
	I1029 09:09:03.509740  162066 out.go:368] Setting JSON to false
	I1029 09:09:03.509763  162066 mustload.go:66] Loading cluster: multinode-181321
	I1029 09:09:03.509889  162066 notify.go:221] Checking for updates...
	I1029 09:09:03.510124  162066 config.go:182] Loaded profile config "multinode-181321": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:09:03.510138  162066 status.go:174] checking status of multinode-181321 ...
	I1029 09:09:03.512287  162066 status.go:371] multinode-181321 host status = "Stopped" (err=<nil>)
	I1029 09:09:03.512305  162066 status.go:384] host is not running, skipping remaining checks
	I1029 09:09:03.512318  162066 status.go:176] multinode-181321 status: &{Name:multinode-181321 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1029 09:09:03.512347  162066 status.go:174] checking status of multinode-181321-m02 ...
	I1029 09:09:03.513508  162066 status.go:371] multinode-181321-m02 host status = "Stopped" (err=<nil>)
	I1029 09:09:03.513523  162066 status.go:384] host is not running, skipping remaining checks
	I1029 09:09:03.513527  162066 status.go:176] multinode-181321-m02 status: &{Name:multinode-181321-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (154.00s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (112.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-181321 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-181321 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.392024275s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181321 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (112.84s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-181321
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-181321-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-181321-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (77.699095ms)

                                                
                                                
-- stdout --
	* [multinode-181321-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-181321-m02' is duplicated with machine name 'multinode-181321-m02' in profile 'multinode-181321'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-181321-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-181321-m03 --driver=kvm2  --container-runtime=crio: (36.696239307s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-181321
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-181321: exit status 80 (203.303076ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-181321 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-181321-m03 already exists in multinode-181321-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-181321-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.88s)
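
Note: the conflict above arises because the worker machines of a multi-node profile are named <profile>-m02, <profile>-m03, and a new profile may not reuse one of those machine names. The hedged Go sketch below checks a candidate name against "minikube node list" before creating it; the whitespace-separated name-and-IP output format is an assumption, as this log does not show it.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        candidate := "multinode-181321-m02" // proposed new profile name
        out, err := exec.Command("minikube", "node", "list", "-p", "multinode-181321").Output()
        if err != nil {
            log.Fatal(err)
        }
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            fields := strings.Fields(line) // assumed format: "<machine-name> <ip>"
            if len(fields) == 0 {
                continue
            }
            if fields[0] == candidate {
                log.Fatalf("profile name %q collides with existing machine %q", candidate, fields[0])
            }
        }
        fmt.Println("no conflict for", candidate)
    }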

                                                
                                    
TestScheduledStopUnix (107.43s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-627765 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-627765 --memory=3072 --driver=kvm2  --container-runtime=crio: (35.737182247s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-627765 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-627765 -n scheduled-stop-627765
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-627765 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1029 09:14:34.639623  141231 retry.go:31] will retry after 66.598µs: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.639799  141231 retry.go:31] will retry after 97.359µs: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.640954  141231 retry.go:31] will retry after 306.311µs: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.642120  141231 retry.go:31] will retry after 500.016µs: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.643262  141231 retry.go:31] will retry after 671.084µs: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.644417  141231 retry.go:31] will retry after 392.737µs: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.645556  141231 retry.go:31] will retry after 1.154062ms: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.647747  141231 retry.go:31] will retry after 2.308166ms: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.650932  141231 retry.go:31] will retry after 3.116744ms: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.655151  141231 retry.go:31] will retry after 5.266239ms: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.661448  141231 retry.go:31] will retry after 7.022037ms: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.668624  141231 retry.go:31] will retry after 10.432706ms: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.679879  141231 retry.go:31] will retry after 12.276162ms: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.693133  141231 retry.go:31] will retry after 22.905827ms: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.716464  141231 retry.go:31] will retry after 31.041538ms: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
I1029 09:14:34.747786  141231 retry.go:31] will retry after 39.524462ms: open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/scheduled-stop-627765/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-627765 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-627765 -n scheduled-stop-627765
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-627765
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-627765 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-627765
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-627765: exit status 7 (64.19794ms)

                                                
                                                
-- stdout --
	scheduled-stop-627765
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-627765 -n scheduled-stop-627765
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-627765 -n scheduled-stop-627765: exit status 7 (62.779604ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-627765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-627765
--- PASS: TestScheduledStopUnix (107.43s)
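
Note: the scheduled-stop flow exercised above is: "stop --schedule <duration>" arms a background stop, "status --format={{.TimeToStop}}" reports the remaining time, and "stop --cancel-scheduled" disarms it. A minimal Go sketch driving the same sequence with os/exec follows; the profile name is copied from this run.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func minikube(args ...string) string {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("minikube %v: %v\n%s", args, err, out)
        }
        return string(out)
    }

    func main() {
        p := "scheduled-stop-627765"
        minikube("stop", "-p", p, "--schedule", "5m")                         // arm a stop 5 minutes out
        fmt.Print(minikube("status", "-p", p, "--format", "{{.TimeToStop}}")) // remaining time
        minikube("stop", "-p", p, "--cancel-scheduled")                       // disarm it again
    }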

                                                
                                    
TestRunningBinaryUpgrade (123.72s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1817661280 start -p running-upgrade-882934 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1817661280 start -p running-upgrade-882934 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m19.754582165s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-882934 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-882934 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.070670739s)
helpers_test.go:175: Cleaning up "running-upgrade-882934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-882934
--- PASS: TestRunningBinaryUpgrade (123.72s)

                                                
                                    
TestKubernetesUpgrade (210.39s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-642154 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-642154 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.807992483s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-642154
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-642154: (2.193635607s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-642154 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-642154 status --format={{.Host}}: exit status 7 (69.905693ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-642154 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-642154 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.972385626s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-642154 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-642154 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-642154 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (82.38063ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-642154] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-642154
	    minikube start -p kubernetes-upgrade-642154 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6421542 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-642154 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-642154 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-642154 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m20.338443268s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-642154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-642154
--- PASS: TestKubernetesUpgrade (210.39s)
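
Note: the supported path shown above is to create the cluster at v1.28.0, stop it, then start the same profile with --kubernetes-version=v1.34.1; a direct downgrade is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) and the suggested fix is to delete and recreate. A sketch of that sequence in Go follows; all flags and versions are copied from this run.

    package main

    import (
        "log"
        "os/exec"
    )

    func minikube(args ...string) error {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        if err != nil {
            log.Printf("minikube %v: %v\n%s", args, err, out)
        }
        return err
    }

    func main() {
        p := "kubernetes-upgrade-642154"
        if err := minikube("start", "-p", p, "--memory=3072", "--kubernetes-version=v1.28.0",
            "--driver=kvm2", "--container-runtime=crio"); err != nil {
            log.Fatal(err)
        }
        if err := minikube("stop", "-p", p); err != nil {
            log.Fatal(err)
        }
        if err := minikube("start", "-p", p, "--memory=3072", "--kubernetes-version=v1.34.1",
            "--driver=kvm2", "--container-runtime=crio"); err != nil {
            log.Fatal(err)
        }
        // The downgrade attempt is expected to fail; treat success as the surprise.
        if err := minikube("start", "-p", p, "--memory=3072", "--kubernetes-version=v1.28.0",
            "--driver=kvm2", "--container-runtime=crio"); err == nil {
            log.Fatal("downgrade unexpectedly succeeded")
        }
    }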

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-598598 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-598598 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (103.293501ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-598598] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
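
Note: --no-kubernetes provisions the VM without a cluster and therefore conflicts with --kubernetes-version; the error above suggests clearing a global setting with "minikube config unset kubernetes-version" when the version comes from config. A small Go sketch of the supported invocation follows; the profile name and flags are copied from this run.

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Ignore the error: unsetting a key that was never set is harmless.
        _ = exec.Command("minikube", "config", "unset", "kubernetes-version").Run()

        cmd := exec.Command("minikube", "start", "-p", "NoKubernetes-598598",
            "--no-kubernetes", "--memory=3072", "--driver=kvm2", "--container-runtime=crio")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("start failed: %v\n%s", err, out)
        }
    }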

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (98.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-598598 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-598598 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m38.665794895s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-598598 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.94s)

TestNetworkPlugins/group/false (6.19s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-588311 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-588311 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (152.033772ms)

                                                
                                                
-- stdout --
	* [false-588311] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21800
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1029 09:16:27.684449  166204 out.go:360] Setting OutFile to fd 1 ...
	I1029 09:16:27.684725  166204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:16:27.684731  166204 out.go:374] Setting ErrFile to fd 2...
	I1029 09:16:27.684736  166204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1029 09:16:27.685145  166204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21800-137232/.minikube/bin
	I1029 09:16:27.685746  166204 out.go:368] Setting JSON to false
	I1029 09:16:27.686889  166204 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7117,"bootTime":1761722271,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1029 09:16:27.686952  166204 start.go:143] virtualization: kvm guest
	I1029 09:16:27.689091  166204 out.go:179] * [false-588311] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1029 09:16:27.691146  166204 out.go:179]   - MINIKUBE_LOCATION=21800
	I1029 09:16:27.691152  166204 notify.go:221] Checking for updates...
	I1029 09:16:27.694682  166204 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1029 09:16:27.696014  166204 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21800-137232/kubeconfig
	I1029 09:16:27.697099  166204 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21800-137232/.minikube
	I1029 09:16:27.698311  166204 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1029 09:16:27.699511  166204 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1029 09:16:27.701044  166204 config.go:182] Loaded profile config "NoKubernetes-598598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:16:27.701191  166204 config.go:182] Loaded profile config "kubernetes-upgrade-642154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1029 09:16:27.701327  166204 config.go:182] Loaded profile config "offline-crio-574519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1029 09:16:27.701590  166204 driver.go:422] Setting default libvirt URI to qemu:///system
	I1029 09:16:27.750183  166204 out.go:179] * Using the kvm2 driver based on user configuration
	I1029 09:16:27.751471  166204 start.go:309] selected driver: kvm2
	I1029 09:16:27.751492  166204 start.go:930] validating driver "kvm2" against <nil>
	I1029 09:16:27.751510  166204 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1029 09:16:27.754098  166204 out.go:203] 
	W1029 09:16:27.758912  166204 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1029 09:16:27.760063  166204 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-588311 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-588311

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-588311

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-588311

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-588311

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-588311

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-588311

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-588311

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-588311

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-588311

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-588311

>>> host: /etc/nsswitch.conf:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: /etc/hosts:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: /etc/resolv.conf:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-588311

>>> host: crictl pods:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: crictl containers:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> k8s: describe netcat deployment:
error: context "false-588311" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-588311" does not exist

>>> k8s: netcat logs:
error: context "false-588311" does not exist

>>> k8s: describe coredns deployment:
error: context "false-588311" does not exist

>>> k8s: describe coredns pods:
error: context "false-588311" does not exist

>>> k8s: coredns logs:
error: context "false-588311" does not exist

>>> k8s: describe api server pod(s):
error: context "false-588311" does not exist

>>> k8s: api server logs:
error: context "false-588311" does not exist

>>> host: /etc/cni:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: ip a s:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: ip r s:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: iptables-save:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: iptables table nat:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> k8s: describe kube-proxy daemon set:
error: context "false-588311" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-588311" does not exist

>>> k8s: kube-proxy logs:
error: context "false-588311" does not exist

>>> host: kubelet daemon status:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: kubelet daemon config:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> k8s: kubelet logs:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-588311

>>> host: docker daemon status:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: docker daemon config:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: /etc/docker/daemon.json:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: docker system info:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: cri-docker daemon status:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: cri-docker daemon config:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: cri-dockerd version:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: containerd daemon status:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: containerd daemon config:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: /etc/containerd/config.toml:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: containerd config dump:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: crio daemon status:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: crio daemon config:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: /etc/crio:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

>>> host: crio config:
* Profile "false-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-588311"

----------------------- debugLogs end: false-588311 [took: 5.857737421s] --------------------------------
helpers_test.go:175: Cleaning up "false-588311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-588311
--- PASS: TestNetworkPlugins/group/false (6.19s)
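Note: the "false" CNI group is a negative test; with --container-runtime=crio minikube requires a CNI, so --cni=false is rejected with MK_USAGE before any VM is created, and the debugLogs above are therefore expected to find no profile or context. A hedged sketch of a start that does satisfy the requirement, mirroring the --cni=bridge run later in this report (illustrative only, not part of the test):

	$ out/minikube-linux-amd64 start -p false-588311 --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio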

                                                
                                    
TestISOImage/Setup (47.65s)

=== RUN   TestISOImage/Setup
iso_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p guest-549168 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p guest-549168 --no-kubernetes --driver=kvm2  --container-runtime=crio: (47.650822707s)
--- PASS: TestISOImage/Setup (47.65s)

TestNoKubernetes/serial/StartWithStopK8s (44.39s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-598598 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-598598 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.204795955s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-598598 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-598598 status -o json: exit status 2 (224.815696ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-598598","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-598598
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (44.39s)
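Note: the exit status 2 from `status` is expected here; the JSON shows the host still running while the kubelet and API server are stopped, which is exactly the state this subtest sets up. A minimal sketch for pulling a single field out of that output (assumes jq is available on the Jenkins host; not part of the test):

	$ out/minikube-linux-amd64 -p NoKubernetes-598598 status -o json | jq -r '.Kubelet'
	Stopped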

                                                
                                    
TestISOImage/Binaries/crictl (0.24s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.24s)

TestISOImage/Binaries/curl (0.18s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

TestISOImage/Binaries/docker (0.19s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.19s)

TestISOImage/Binaries/git (0.18s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.18s)

TestISOImage/Binaries/iptables (0.2s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.20s)

TestISOImage/Binaries/podman (0.19s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.19s)

TestISOImage/Binaries/rsync (0.19s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "which rsync"
I1029 09:24:20.354099  141231 config.go:182] Loaded profile config "enable-default-cni-588311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestISOImage/Binaries/rsync (0.19s)

TestISOImage/Binaries/socat (0.21s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.21s)

TestISOImage/Binaries/wget (0.21s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.21s)

TestISOImage/Binaries/VBoxControl (0.19s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.19s)

TestISOImage/Binaries/VBoxService (0.21s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.21s)

TestNoKubernetes/serial/Start (50.56s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-598598 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-598598 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (50.556773655s)
--- PASS: TestNoKubernetes/serial/Start (50.56s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-598598 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-598598 "sudo systemctl is-active --quiet service kubelet": exit status 1 (169.425688ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
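Note: the check passes because `systemctl is-active` exits non-zero when the unit is not active (or is not loaded), which is the desired state for a --no-kubernetes profile; minikube ssh then surfaces that as exit status 1. To see the unit state printed by hand over the same path, run is-active without --quiet (illustrative only, not part of the test):

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-598598 "sudo systemctl is-active kubelet"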

                                                
                                    
TestNoKubernetes/serial/ProfileList (16.1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (15.348552507s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.10s)

TestNoKubernetes/serial/Stop (1.33s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-598598
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-598598: (1.333893994s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

TestNoKubernetes/serial/StartNoArgs (21.35s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-598598 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-598598 --driver=kvm2  --container-runtime=crio: (21.3492407s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.35s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-598598 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-598598 "sudo systemctl is-active --quiet service kubelet": exit status 1 (179.108608ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

TestStoppedBinaryUpgrade/Setup (3.28s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.28s)

TestPause/serial/Start (79.76s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-893324 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-893324 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m19.75831333s)
--- PASS: TestPause/serial/Start (79.76s)

TestStoppedBinaryUpgrade/Upgrade (120.59s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1238113449 start -p stopped-upgrade-317680 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1238113449 start -p stopped-upgrade-317680 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m25.070951944s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1238113449 -p stopped-upgrade-317680 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1238113449 -p stopped-upgrade-317680 stop: (1.94318838s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-317680 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-317680 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (33.572387342s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (120.59s)

TestNetworkPlugins/group/auto/Start (94.55s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m34.541974851s)
--- PASS: TestNetworkPlugins/group/auto/Start (94.55s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-317680
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-317680: (1.368504101s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

TestNetworkPlugins/group/kindnet/Start (67.99s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m7.990841696s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.99s)

TestNetworkPlugins/group/calico/Start (84.92s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m24.915559199s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.92s)

TestNetworkPlugins/group/custom-flannel/Start (100.91s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m40.904827466s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (100.91s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-c8m42" [236aadda-b213-43fa-99ee-272eb3331df7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005320448s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.03s)

TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-588311 "pgrep -a kubelet"
I1029 09:22:54.812600  141231 config.go:182] Loaded profile config "auto-588311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

TestNetworkPlugins/group/auto/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-588311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dnrnh" [d69e7e72-10b5-45ce-928d-68a25909dd0c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dnrnh" [d69e7e72-10b5-45ce-928d-68a25909dd0c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004141915s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-588311 "pgrep -a kubelet"
I1029 09:23:00.444622  141231 config.go:182] Loaded profile config "kindnet-588311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.66s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-588311 replace --force -f testdata/netcat-deployment.yaml
I1029 09:23:01.041323  141231 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1029 09:23:01.042027  141231 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-j6bj7" [94685038-1245-465a-ba73-e692e59daf3d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-j6bj7" [94685038-1245-465a-ba73-e692e59daf3d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.006605104s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.66s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-588311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-588311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (57.94s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (57.937806765s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (57.94s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-f26lq" [4faa4880-3355-4383-a676-75880b32d116] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-f26lq" [4faa4880-3355-4383-a676-75880b32d116] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006487473s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/Start (93.6s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m33.59576947s)
--- PASS: TestNetworkPlugins/group/flannel/Start (93.60s)

TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-588311 "pgrep -a kubelet"
I1029 09:23:34.135937  141231 config.go:182] Loaded profile config "calico-588311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

TestNetworkPlugins/group/calico/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-588311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qtjgz" [92b0f6f8-4db0-4515-b498-c64482fc0cf4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qtjgz" [92b0f6f8-4db0-4515-b498-c64482fc0cf4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.006984114s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.35s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-588311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)
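The DNS, Localhost, and HairPin probes above all exec into the same netcat deployment created from testdata/netcat-deployment.yaml. A minimal manual re-run against this profile (assuming the calico-588311 kube-context and the netcat deployment still exist) would be:

	kubectl --context calico-588311 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context calico-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context calico-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The last probe dials the service name netcat on port 8080 from inside its own backing pod, so it only passes when the CNI handles hairpin traffic.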

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-588311 "pgrep -a kubelet"
I1029 09:23:48.659135  141231 config.go:182] Loaded profile config "custom-flannel-588311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-588311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h7j84" [127302c9-a046-45a2-8744-e967e8a55d8e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1029 09:23:51.151458  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-h7j84" [127302c9-a046-45a2-8744-e967e8a55d8e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005234189s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-588311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (94.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-588311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m34.161642463s)
--- PASS: TestNetworkPlugins/group/bridge/Start (94.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-588311 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-588311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hvqtp" [12ef6961-a083-401b-af21-a60c1522424c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hvqtp" [12ef6961-a083-401b-af21-a60c1522424c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003954023s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (62.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-887396 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-887396 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.3033595s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-588311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (104.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-663852 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-663852 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m44.052421792s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (104.05s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-vcttg" [b68492b0-b669-4530-b15d-f7dc16acd531] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00473688s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-588311 "pgrep -a kubelet"
I1029 09:25:09.295873  141231 config.go:182] Loaded profile config "flannel-588311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-588311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-45x6c" [644b4ba3-fc15-4a98-99a7-a538c1f95615] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-45x6c" [644b4ba3-fc15-4a98-99a7-a538c1f95615] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004019623s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-588311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-887396 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [af551ca8-b198-408d-9d46-19c09cbdd202] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [af551ca8-b198-408d-9d46-19c09cbdd202] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004510903s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-887396 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.36s)
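DeployApp creates the busybox pod from testdata/busybox.yaml, waits up to 8m0s for the integration-test=busybox selector to be Running, and then execs a trivial command as a smoke test that kubectl exec works against the fresh cluster. The harness polls with its own helpers; a rough manual equivalent (the kubectl wait call here is an assumption, not what the test actually runs) would be:

	kubectl --context old-k8s-version-887396 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-887396 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-887396 exec busybox -- /bin/sh -c "ulimit -n"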

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-887396 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-887396 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.230132025s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-887396 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-588311 "pgrep -a kubelet"
I1029 09:25:36.466310  141231 config.go:182] Loaded profile config "bridge-588311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-588311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jlt4c" [9eb4df4e-24f5-4c37-acbe-ab0d98b28901] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jlt4c" [9eb4df4e-24f5-4c37-acbe-ab0d98b28901] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006311857s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (87.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-887396 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-887396 --alsologtostderr -v=3: (1m27.321799682s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (87.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (86.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-790038 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-790038 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.671683044s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.67s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-588311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-588311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-235815 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-235815 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m22.975902677s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-663852 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [147d2acf-7963-40e8-974f-6a99b02d97ff] Pending
helpers_test.go:352: "busybox" [147d2acf-7963-40e8-974f-6a99b02d97ff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [147d2acf-7963-40e8-974f-6a99b02d97ff] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005634698s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-663852 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-663852 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-663852 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (81.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-663852 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-663852 --alsologtostderr -v=3: (1m21.383664773s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (81.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-887396 -n old-k8s-version-887396
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-887396 -n old-k8s-version-887396: exit status 7 (69.776867ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-887396 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)
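EnableAddonAfterStop first confirms the machine is down via the Host status field (which prints Stopped and exits with status 7 for a stopped profile, treated as acceptable here) and then enables an addon against the stopped profile. A condensed manual equivalent, assuming the same profile name:

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-887396 -n old-k8s-version-887396    # prints "Stopped", exit status 7
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-887396 --images=MetricsScraper=registry.k8s.io/echoserver:1.4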

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (42.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-887396 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-887396 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (42.133313313s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-887396 -n old-k8s-version-887396
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (42.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-790038 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [78ee0ca2-c74f-4377-8213-5dc43faa2344] Pending
helpers_test.go:352: "busybox" [78ee0ca2-c74f-4377-8213-5dc43faa2344] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [78ee0ca2-c74f-4377-8213-5dc43faa2344] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00451644s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-790038 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-790038 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-790038 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (76.69s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-790038 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-790038 --alsologtostderr -v=3: (1m16.686551392s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (76.69s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-235815 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [39aec0c1-3777-415e-9476-46fabc87ae7e] Pending
helpers_test.go:352: "busybox" [39aec0c1-3777-415e-9476-46fabc87ae7e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [39aec0c1-3777-415e-9476-46fabc87ae7e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.005166429s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-235815 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-235815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-235815 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (90.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-235815 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-235815 --alsologtostderr -v=3: (1m30.675010839s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (90.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lt6lv" [f89a70de-00d9-4d75-8f12-219d8f431b46] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1029 09:27:54.129484  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:54.135955  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:54.147382  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:54.168851  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:54.210305  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:54.291791  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:54.453472  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:54.775274  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:55.066483  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:55.072886  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:55.084275  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:55.105674  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:55.147151  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:55.228706  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:55.393525  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:55.417676  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:55.715286  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:56.357475  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:56.699741  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:27:57.638848  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lt6lv" [f89a70de-00d9-4d75-8f12-219d8f431b46] Running
E1029 09:27:59.261642  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:00.200272  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.003754618s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lt6lv" [f89a70de-00d9-4d75-8f12-219d8f431b46] Running
E1029 09:28:04.383249  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003937618s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-887396 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-663852 -n no-preload-663852
E1029 09:28:05.321737  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-663852 -n no-preload-663852: exit status 7 (63.201758ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-663852 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (54.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-663852 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-663852 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (54.621054853s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-663852 -n no-preload-663852
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-887396 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-887396 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-887396 -n old-k8s-version-887396
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-887396 -n old-k8s-version-887396: exit status 2 (230.757754ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-887396 -n old-k8s-version-887396
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-887396 -n old-k8s-version-887396: exit status 2 (205.845438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-887396 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-887396 -n old-k8s-version-887396
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-887396 -n old-k8s-version-887396
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.47s)
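The Pause step cycles pause/unpause and reads the per-component status fields in between; while paused, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, both with exit status 2, which the test tolerates. A condensed manual equivalent for the same profile:

	out/minikube-linux-amd64 pause -p old-k8s-version-887396 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-887396 -n old-k8s-version-887396   # Paused, exit status 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-887396 -n old-k8s-version-887396     # Stopped, exit status 2
	out/minikube-linux-amd64 unpause -p old-k8s-version-887396 --alsologtostderr -v=1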

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (54.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-495633 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1029 09:28:14.625581  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:15.563452  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:27.914463  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/calico-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:27.920860  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/calico-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:27.932254  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/calico-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:27.953648  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/calico-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:27.995125  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/calico-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:28.076762  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/calico-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:28.238375  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/calico-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:28.560689  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/calico-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:29.202474  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/calico-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:30.484593  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/calico-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-495633 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (54.778856322s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (54.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-790038 -n embed-certs-790038
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-790038 -n embed-certs-790038: exit status 7 (64.058435ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-790038 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (56.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-790038 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1029 09:28:33.046433  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/calico-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:34.223453  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:35.107512  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:36.045394  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:38.168218  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/calico-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:45.865265  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/addons-131912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:48.410446  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/calico-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:48.922848  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/custom-flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:48.929370  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/custom-flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:48.940906  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/custom-flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:48.962232  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/custom-flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:49.003800  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/custom-flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:49.085729  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/custom-flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:49.247991  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/custom-flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:49.570245  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/custom-flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:50.211929  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/custom-flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:51.151933  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/functional-373499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:51.493818  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/custom-flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:54.055890  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/custom-flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:28:59.177674  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/custom-flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-790038 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (56.19255465s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-790038 -n embed-certs-790038
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (56.48s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hkqvp" [c82c2ea3-0cb9-4be8-9d1d-f9870ab491e1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hkqvp" [c82c2ea3-0cb9-4be8-9d1d-f9870ab491e1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.00489194s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-495633 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-495633 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.022609962s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (72.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-495633 --alsologtostderr -v=3
E1029 09:29:08.891949  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/calico-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:29:09.419757  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/custom-flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-495633 --alsologtostderr -v=3: (1m12.535007822s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (72.54s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-235815 -n default-k8s-diff-port-235815
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-235815 -n default-k8s-diff-port-235815: exit status 7 (72.081076ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-235815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-235815 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-235815 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (45.043639789s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-235815 -n default-k8s-diff-port-235815
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hkqvp" [c82c2ea3-0cb9-4be8-9d1d-f9870ab491e1] Running
E1029 09:29:16.069213  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:29:17.007743  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003781556s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-663852 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-663852 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-663852 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-663852 -n no-preload-663852
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-663852 -n no-preload-663852: exit status 2 (232.789735ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-663852 -n no-preload-663852
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-663852 -n no-preload-663852: exit status 2 (254.582029ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-663852 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-663852 -n no-preload-663852
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-663852 -n no-preload-663852
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.67s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.21s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
E1029 09:29:21.912177  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/enable-default-cni-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)
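Each PersistentMounts subtest above runs the same one-line probe inside the guest: df is limited to ext4 filesystems and its output is grepped for the mount point, so the check only succeeds when that path is served by an ext4 filesystem, which on this ISO should be the persistent data disk. A minimal sketch for checking a single path by hand, assuming the guest-549168 profile from this run is still up:

	# Succeeds (exit 0) only if /data is mounted from an ext4 filesystem.
	out/minikube-linux-amd64 -p guest-549168 ssh "df -t ext4 /data | grep /data"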

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.18s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p guest-549168 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.18s)
E1029 09:29:25.755346  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/enable-default-cni-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
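The eBPFSupport probe above looks for /sys/kernel/btf/vmlinux, which the kernel exposes only when built with BTF type information (CONFIG_DEBUG_INFO_BTF); modern CO-RE eBPF tooling depends on it. A minimal way to rerun the same check by hand against this run's guest-549168 profile:

	# Prints OK when the running kernel exports BTF type information.
	out/minikube-linux-amd64 -p guest-549168 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"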

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b9dbx" [dec7ee06-5bda-4338-9f8a-db8cafb6ad27] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1029 09:29:29.902143  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/custom-flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:29:30.877548  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/enable-default-cni-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b9dbx" [dec7ee06-5bda-4338-9f8a-db8cafb6ad27] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004388265s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b9dbx" [dec7ee06-5bda-4338-9f8a-db8cafb6ad27] Running
E1029 09:29:41.119222  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/enable-default-cni-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003332002s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-790038 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-790038 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-790038 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-790038 -n embed-certs-790038
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-790038 -n embed-certs-790038: exit status 2 (211.287909ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-790038 -n embed-certs-790038
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-790038 -n embed-certs-790038: exit status 2 (203.113774ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-790038 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-790038 -n embed-certs-790038
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-790038 -n embed-certs-790038
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.49s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7ntrj" [67b75524-d59b-401a-9209-dee997defb5e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7ntrj" [67b75524-d59b-401a-9209-dee997defb5e] Running
E1029 09:30:01.601361  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/enable-default-cni-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004067153s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7ntrj" [67b75524-d59b-401a-9209-dee997defb5e] Running
E1029 09:30:03.099268  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:03.105775  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:03.117281  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:03.138831  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:03.180386  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:03.262014  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:03.423680  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:03.745399  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:04.386874  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:05.668312  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004261801s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-235815 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-235815 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-235815 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-235815 -n default-k8s-diff-port-235815
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-235815 -n default-k8s-diff-port-235815: exit status 2 (205.687332ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-235815 -n default-k8s-diff-port-235815
E1029 09:30:08.230260  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-235815 -n default-k8s-diff-port-235815: exit status 2 (211.171174ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-235815 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-235815 -n default-k8s-diff-port-235815
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-235815 -n default-k8s-diff-port-235815
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-495633 -n newest-cni-495633
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-495633 -n newest-cni-495633: exit status 7 (63.37936ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-495633 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)
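The EnableAddonAfterStop entries follow a fixed pattern: status is expected to exit with code 7 and report Stopped (the harness notes "may be ok"), after which enabling an addon must still succeed against the stopped profile. Replayed by hand against the same profile from this run, the sequence looks roughly like this:

	# Exit code 7 / "Stopped" is the expected state here, not a failure.
	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-495633 -n newest-cni-495633
	out/minikube-linux-amd64 addons enable dashboard -p newest-cni-495633 --images=MetricsScraper=registry.k8s.io/echoserver:1.4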

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (32.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-495633 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1029 09:30:23.594180  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:24.638865  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/old-k8s-version-887396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:24.645328  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/old-k8s-version-887396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:24.656832  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/old-k8s-version-887396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:24.678218  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/old-k8s-version-887396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:24.719753  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/old-k8s-version-887396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:24.801249  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/old-k8s-version-887396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:24.962900  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/old-k8s-version-887396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:25.285019  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/old-k8s-version-887396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:25.926441  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/old-k8s-version-887396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:27.208644  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/old-k8s-version-887396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:29.770291  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/old-k8s-version-887396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:34.891891  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/old-k8s-version-887396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:36.753885  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/bridge-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:36.760266  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/bridge-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:36.771742  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/bridge-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:36.793229  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/bridge-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:36.834679  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/bridge-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:36.916705  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/bridge-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:37.078066  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/bridge-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:37.400190  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/bridge-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:37.990516  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/kindnet-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:38.042021  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/bridge-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:38.929949  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/auto-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:39.324761  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/bridge-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:41.886886  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/bridge-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:42.562730  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/enable-default-cni-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:44.075873  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/flannel-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:45.133209  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/old-k8s-version-887396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1029 09:30:47.009256  141231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21800-137232/.minikube/profiles/bridge-588311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-495633 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (31.853373538s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-495633 -n newest-cni-495633
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-495633 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-495633 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-495633 -n newest-cni-495633
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-495633 -n newest-cni-495633: exit status 2 (207.850383ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-495633 -n newest-cni-495633
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-495633 -n newest-cni-495633: exit status 2 (203.224474ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-495633 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-495633 -n newest-cni-495633
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-495633 -n newest-cni-495633
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.36s)
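Each Pause subtest exercises the same round trip: pause the profile, confirm that status reports the apiserver as Paused and the kubelet as Stopped (both with a non-zero exit code the test tolerates), then unpause and confirm status runs cleanly again. A condensed sketch of that sequence for the newest-cni-495633 profile from this run:

	out/minikube-linux-amd64 pause -p newest-cni-495633 --alsologtostderr -v=1
	# While paused, these exit with status 2 and print "Paused" / "Stopped" respectively.
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-495633 -n newest-cni-495633
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-495633 -n newest-cni-495633
	out/minikube-linux-amd64 unpause -p newest-cni-495633 --alsologtostderr -v=1
	# After unpausing, the same status commands succeed again.
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-495633 -n newest-cni-495633
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-495633 -n newest-cni-495633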

                                                
                                    

Test skip (40/343)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.29
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
143 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
147 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 4.19
267 TestNetworkPlugins/group/cilium 5.04
294 TestStartStop/group/disable-driver-mounts 0.2
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-131912 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (4.19s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-588311 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-588311

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-588311

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-588311

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-588311

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-588311

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-588311

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-588311

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-588311

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-588311

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-588311

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-588311

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-588311" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-588311" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-588311

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-588311"

                                                
                                                
----------------------- debugLogs end: kubenet-588311 [took: 4.006847275s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-588311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-588311
--- SKIP: TestNetworkPlugins/group/kubenet (4.19s)

TestNetworkPlugins/group/cilium (5.04s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-588311 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-588311" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-588311

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-588311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-588311"

                                                
                                                
----------------------- debugLogs end: cilium-588311 [took: 4.83804981s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-588311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-588311
--- SKIP: TestNetworkPlugins/group/cilium (5.04s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-331247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-331247
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)