Test Report: KVM_Linux_crio 21504

3892f90e7d746f1b37c491f3707229f264f0f5da:2025-09-08:41335

Failed tests (3/324)

Order  Failed test                                      Duration (s)
37     TestAddons/parallel/Ingress                      167.26
244    TestPreload                                      160.16
290    TestPause/serial/SecondStartNoReconfiguration    121.98
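
To reproduce one of these failures locally, the ordinary Go test entry point can be used from a minikube source checkout. This is a minimal sketch only: it assumes the integration tests under test/integration, an already-built out/minikube-linux-amd64, and the same KVM/crio environment as this job; the timeout value is illustrative and CI may pass additional flags.

    # re-run only the failing ingress subtest (assumed invocation)
    go test ./test/integration -v -timeout 60m -run 'TestAddons/parallel/Ingress'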
TestAddons/parallel/Ingress (167.26s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-198632 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-198632 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-198632 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [81e6f15d-59f8-4450-a2eb-847b8cb17a16] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [81e6f15d-59f8-4450-a2eb-847b8cb17a16] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 18.003413667s
I0908 16:41:51.275034   11781 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-198632 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.345679744s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-198632 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.229
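
Exit status 28 from the remote command is curl's CURLE_OPERATION_TIMEDOUT, so the HTTP request to the ingress controller on 127.0.0.1:80 inside the VM never completed. A manual re-check against a still-running profile could look like the sketch below; the controller deployment name is assumed from the upstream ingress-nginx manifests and the --max-time value is illustrative.

    # hedged reproduction sketch; assumes the addons-198632 profile is still up
    out/minikube-linux-amd64 -p addons-198632 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    kubectl --context addons-198632 -n ingress-nginx get pods -o wide
    kubectl --context addons-198632 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
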
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-198632 -n addons-198632
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-198632 logs -n 25: (1.386563114s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│---------│------│---------│------│---------│------------│----------│
	│ delete  │ -p download-only-217769 │ download-only-217769 │ jenkins │ v1.36.0 │ 08 Sep 25 16:37 UTC │ 08 Sep 25 16:37 UTC │
	│ start   │ --download-only -p binary-mirror-853404 --alsologtostderr --binary-mirror http://127.0.0.1:45175 --driver=kvm2 --container-runtime=crio │ binary-mirror-853404 │ jenkins │ v1.36.0 │ 08 Sep 25 16:37 UTC │ │
	│ delete  │ -p binary-mirror-853404 │ binary-mirror-853404 │ jenkins │ v1.36.0 │ 08 Sep 25 16:37 UTC │ 08 Sep 25 16:37 UTC │
	│ addons  │ enable dashboard -p addons-198632 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:37 UTC │ │
	│ addons  │ disable dashboard -p addons-198632 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:37 UTC │ │
	│ start   │ -p addons-198632 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:37 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ addons-198632 addons disable volcano --alsologtostderr -v=1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:40 UTC │ 08 Sep 25 16:40 UTC │
	│ addons  │ addons-198632 addons disable gcp-auth --alsologtostderr -v=1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ addons  │ enable headlamp -p addons-198632 --alsologtostderr -v=1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ addons  │ addons-198632 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ addons  │ addons-198632 addons disable metrics-server --alsologtostderr -v=1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ ssh     │ addons-198632 ssh cat /opt/local-path-provisioner/pvc-865d9e29-b122-40d8-9365-422fafd2157b_default_test-pvc/file1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ addons  │ addons-198632 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ addons  │ addons-198632 addons disable headlamp --alsologtostderr -v=1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ ip      │ addons-198632 ip │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ addons  │ addons-198632 addons disable registry --alsologtostderr -v=1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ addons  │ addons-198632 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ addons  │ addons-198632 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-198632 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ addons  │ addons-198632 addons disable registry-creds --alsologtostderr -v=1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ addons  │ addons-198632 addons disable yakd --alsologtostderr -v=1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ 08 Sep 25 16:41 UTC │
	│ ssh     │ addons-198632 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:41 UTC │ │
	│ addons  │ addons-198632 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:42 UTC │ 08 Sep 25 16:42 UTC │
	│ addons  │ addons-198632 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:42 UTC │ 08 Sep 25 16:42 UTC │
	│ ip      │ addons-198632 ip │ addons-198632 │ jenkins │ v1.36.0 │ 08 Sep 25 16:44 UTC │ 08 Sep 25 16:44 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 16:37:24
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 16:37:24.307603   12488 out.go:360] Setting OutFile to fd 1 ...
	I0908 16:37:24.307834   12488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:37:24.307842   12488 out.go:374] Setting ErrFile to fd 2...
	I0908 16:37:24.307846   12488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:37:24.308022   12488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
	I0908 16:37:24.308625   12488 out.go:368] Setting JSON to false
	I0908 16:37:24.309368   12488 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1187,"bootTime":1757348257,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 16:37:24.309455   12488 start.go:140] virtualization: kvm guest
	I0908 16:37:24.311520   12488 out.go:179] * [addons-198632] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 16:37:24.312787   12488 notify.go:220] Checking for updates...
	I0908 16:37:24.312792   12488 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 16:37:24.314169   12488 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 16:37:24.315558   12488 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 16:37:24.316918   12488 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 16:37:24.318146   12488 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 16:37:24.319444   12488 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 16:37:24.320777   12488 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 16:37:24.351908   12488 out.go:179] * Using the kvm2 driver based on user configuration
	I0908 16:37:24.353220   12488 start.go:304] selected driver: kvm2
	I0908 16:37:24.353234   12488 start.go:918] validating driver "kvm2" against <nil>
	I0908 16:37:24.353246   12488 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 16:37:24.353938   12488 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 16:37:24.354021   12488 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21504-7629/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 16:37:24.368687   12488 install.go:137] /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 16:37:24.368749   12488 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 16:37:24.368972   12488 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 16:37:24.369001   12488 cni.go:84] Creating CNI manager for ""
	I0908 16:37:24.369041   12488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 16:37:24.369050   12488 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 16:37:24.369114   12488 start.go:348] cluster config:
	{Name:addons-198632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-198632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 16:37:24.369204   12488 iso.go:125] acquiring lock: {Name:mkaf49872b434993209a65bf0f93ea3e4c6d93b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 16:37:24.370985   12488 out.go:179] * Starting "addons-198632" primary control-plane node in "addons-198632" cluster
	I0908 16:37:24.372239   12488 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 16:37:24.372286   12488 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 16:37:24.372296   12488 cache.go:58] Caching tarball of preloaded images
	I0908 16:37:24.372384   12488 preload.go:172] Found /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 16:37:24.372398   12488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 16:37:24.372709   12488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/config.json ...
	I0908 16:37:24.372731   12488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/config.json: {Name:mkaf9ba792544e652ff49d9cae6bc86644a01753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:24.372873   12488 start.go:360] acquireMachinesLock for addons-198632: {Name:mka7c3ca4a3e37e9483e7804183d91c6725d32e4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 16:37:24.372942   12488 start.go:364] duration metric: took 52.649µs to acquireMachinesLock for "addons-198632"
	I0908 16:37:24.372966   12488 start.go:93] Provisioning new machine with config: &{Name:addons-198632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-198632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 16:37:24.373027   12488 start.go:125] createHost starting for "" (driver="kvm2")
	I0908 16:37:24.374719   12488 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0908 16:37:24.374835   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:37:24.374885   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:37:24.389435   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I0908 16:37:24.389885   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:37:24.390385   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:37:24.390406   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:37:24.390771   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:37:24.390920   12488 main.go:141] libmachine: (addons-198632) Calling .GetMachineName
	I0908 16:37:24.391055   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:37:24.391149   12488 start.go:159] libmachine.API.Create for "addons-198632" (driver="kvm2")
	I0908 16:37:24.391201   12488 client.go:168] LocalClient.Create starting
	I0908 16:37:24.391244   12488 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem
	I0908 16:37:24.445665   12488 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/cert.pem
	I0908 16:37:24.644302   12488 main.go:141] libmachine: Running pre-create checks...
	I0908 16:37:24.644325   12488 main.go:141] libmachine: (addons-198632) Calling .PreCreateCheck
	I0908 16:37:24.644800   12488 main.go:141] libmachine: (addons-198632) Calling .GetConfigRaw
	I0908 16:37:24.645253   12488 main.go:141] libmachine: Creating machine...
	I0908 16:37:24.645269   12488 main.go:141] libmachine: (addons-198632) Calling .Create
	I0908 16:37:24.645396   12488 main.go:141] libmachine: (addons-198632) creating KVM machine...
	I0908 16:37:24.645424   12488 main.go:141] libmachine: (addons-198632) creating network...
	I0908 16:37:24.646571   12488 main.go:141] libmachine: (addons-198632) DBG | found existing default KVM network
	I0908 16:37:24.647187   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:24.647048   12510 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123560}
	I0908 16:37:24.647235   12488 main.go:141] libmachine: (addons-198632) DBG | created network xml: 
	I0908 16:37:24.647256   12488 main.go:141] libmachine: (addons-198632) DBG | <network>
	I0908 16:37:24.647271   12488 main.go:141] libmachine: (addons-198632) DBG |   <name>mk-addons-198632</name>
	I0908 16:37:24.647279   12488 main.go:141] libmachine: (addons-198632) DBG |   <dns enable='no'/>
	I0908 16:37:24.647301   12488 main.go:141] libmachine: (addons-198632) DBG |   
	I0908 16:37:24.647321   12488 main.go:141] libmachine: (addons-198632) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0908 16:37:24.647340   12488 main.go:141] libmachine: (addons-198632) DBG |     <dhcp>
	I0908 16:37:24.647350   12488 main.go:141] libmachine: (addons-198632) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0908 16:37:24.647397   12488 main.go:141] libmachine: (addons-198632) DBG |     </dhcp>
	I0908 16:37:24.647416   12488 main.go:141] libmachine: (addons-198632) DBG |   </ip>
	I0908 16:37:24.647425   12488 main.go:141] libmachine: (addons-198632) DBG |   
	I0908 16:37:24.647430   12488 main.go:141] libmachine: (addons-198632) DBG | </network>
	I0908 16:37:24.647437   12488 main.go:141] libmachine: (addons-198632) DBG | 
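
For reference, the network definition printed above can also be applied by hand with virsh. A sketch, assuming the <network> XML above has been saved to a file (the file name is illustrative) and that qemu:///system is the target URI, as in the log:

    # define, start, and list the same private libvirt network manually (sketch)
    virsh --connect qemu:///system net-define mk-addons-198632.xml
    virsh --connect qemu:///system net-start mk-addons-198632
    virsh --connect qemu:///system net-list --all
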
	I0908 16:37:24.652443   12488 main.go:141] libmachine: (addons-198632) DBG | trying to create private KVM network mk-addons-198632 192.168.39.0/24...
	I0908 16:37:24.716741   12488 main.go:141] libmachine: (addons-198632) DBG | private KVM network mk-addons-198632 192.168.39.0/24 created
	I0908 16:37:24.716792   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:24.716727   12510 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 16:37:24.716814   12488 main.go:141] libmachine: (addons-198632) setting up store path in /home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632 ...
	I0908 16:37:24.716832   12488 main.go:141] libmachine: (addons-198632) building disk image from file:///home/jenkins/minikube-integration/21504-7629/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0908 16:37:24.716866   12488 main.go:141] libmachine: (addons-198632) Downloading /home/jenkins/minikube-integration/21504-7629/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21504-7629/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 16:37:25.030931   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:25.030814   12510 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa...
	I0908 16:37:25.316207   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:25.316061   12510 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/addons-198632.rawdisk...
	I0908 16:37:25.316242   12488 main.go:141] libmachine: (addons-198632) DBG | Writing magic tar header
	I0908 16:37:25.316253   12488 main.go:141] libmachine: (addons-198632) DBG | Writing SSH key tar header
	I0908 16:37:25.316260   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:25.316176   12510 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632 ...
	I0908 16:37:25.316335   12488 main.go:141] libmachine: (addons-198632) setting executable bit set on /home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632 (perms=drwx------)
	I0908 16:37:25.316361   12488 main.go:141] libmachine: (addons-198632) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632
	I0908 16:37:25.316372   12488 main.go:141] libmachine: (addons-198632) setting executable bit set on /home/jenkins/minikube-integration/21504-7629/.minikube/machines (perms=drwxr-xr-x)
	I0908 16:37:25.316385   12488 main.go:141] libmachine: (addons-198632) setting executable bit set on /home/jenkins/minikube-integration/21504-7629/.minikube (perms=drwxr-xr-x)
	I0908 16:37:25.316391   12488 main.go:141] libmachine: (addons-198632) setting executable bit set on /home/jenkins/minikube-integration/21504-7629 (perms=drwxrwxr-x)
	I0908 16:37:25.316398   12488 main.go:141] libmachine: (addons-198632) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0908 16:37:25.316406   12488 main.go:141] libmachine: (addons-198632) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0908 16:37:25.316413   12488 main.go:141] libmachine: (addons-198632) creating domain...
	I0908 16:37:25.316423   12488 main.go:141] libmachine: (addons-198632) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21504-7629/.minikube/machines
	I0908 16:37:25.316437   12488 main.go:141] libmachine: (addons-198632) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 16:37:25.316470   12488 main.go:141] libmachine: (addons-198632) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21504-7629
	I0908 16:37:25.316482   12488 main.go:141] libmachine: (addons-198632) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0908 16:37:25.316487   12488 main.go:141] libmachine: (addons-198632) DBG | checking permissions on dir: /home/jenkins
	I0908 16:37:25.316494   12488 main.go:141] libmachine: (addons-198632) DBG | checking permissions on dir: /home
	I0908 16:37:25.316501   12488 main.go:141] libmachine: (addons-198632) DBG | skipping /home - not owner
	I0908 16:37:25.317517   12488 main.go:141] libmachine: (addons-198632) define libvirt domain using xml: 
	I0908 16:37:25.317534   12488 main.go:141] libmachine: (addons-198632) <domain type='kvm'>
	I0908 16:37:25.317563   12488 main.go:141] libmachine: (addons-198632)   <name>addons-198632</name>
	I0908 16:37:25.317570   12488 main.go:141] libmachine: (addons-198632)   <memory unit='MiB'>4096</memory>
	I0908 16:37:25.317579   12488 main.go:141] libmachine: (addons-198632)   <vcpu>2</vcpu>
	I0908 16:37:25.317585   12488 main.go:141] libmachine: (addons-198632)   <features>
	I0908 16:37:25.317595   12488 main.go:141] libmachine: (addons-198632)     <acpi/>
	I0908 16:37:25.317602   12488 main.go:141] libmachine: (addons-198632)     <apic/>
	I0908 16:37:25.317634   12488 main.go:141] libmachine: (addons-198632)     <pae/>
	I0908 16:37:25.317656   12488 main.go:141] libmachine: (addons-198632)     
	I0908 16:37:25.317677   12488 main.go:141] libmachine: (addons-198632)   </features>
	I0908 16:37:25.317692   12488 main.go:141] libmachine: (addons-198632)   <cpu mode='host-passthrough'>
	I0908 16:37:25.317701   12488 main.go:141] libmachine: (addons-198632)   
	I0908 16:37:25.317706   12488 main.go:141] libmachine: (addons-198632)   </cpu>
	I0908 16:37:25.317712   12488 main.go:141] libmachine: (addons-198632)   <os>
	I0908 16:37:25.317716   12488 main.go:141] libmachine: (addons-198632)     <type>hvm</type>
	I0908 16:37:25.317723   12488 main.go:141] libmachine: (addons-198632)     <boot dev='cdrom'/>
	I0908 16:37:25.317727   12488 main.go:141] libmachine: (addons-198632)     <boot dev='hd'/>
	I0908 16:37:25.317735   12488 main.go:141] libmachine: (addons-198632)     <bootmenu enable='no'/>
	I0908 16:37:25.317739   12488 main.go:141] libmachine: (addons-198632)   </os>
	I0908 16:37:25.317743   12488 main.go:141] libmachine: (addons-198632)   <devices>
	I0908 16:37:25.317749   12488 main.go:141] libmachine: (addons-198632)     <disk type='file' device='cdrom'>
	I0908 16:37:25.317759   12488 main.go:141] libmachine: (addons-198632)       <source file='/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/boot2docker.iso'/>
	I0908 16:37:25.317776   12488 main.go:141] libmachine: (addons-198632)       <target dev='hdc' bus='scsi'/>
	I0908 16:37:25.317788   12488 main.go:141] libmachine: (addons-198632)       <readonly/>
	I0908 16:37:25.317797   12488 main.go:141] libmachine: (addons-198632)     </disk>
	I0908 16:37:25.317805   12488 main.go:141] libmachine: (addons-198632)     <disk type='file' device='disk'>
	I0908 16:37:25.317823   12488 main.go:141] libmachine: (addons-198632)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0908 16:37:25.317834   12488 main.go:141] libmachine: (addons-198632)       <source file='/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/addons-198632.rawdisk'/>
	I0908 16:37:25.317841   12488 main.go:141] libmachine: (addons-198632)       <target dev='hda' bus='virtio'/>
	I0908 16:37:25.317848   12488 main.go:141] libmachine: (addons-198632)     </disk>
	I0908 16:37:25.317858   12488 main.go:141] libmachine: (addons-198632)     <interface type='network'>
	I0908 16:37:25.317871   12488 main.go:141] libmachine: (addons-198632)       <source network='mk-addons-198632'/>
	I0908 16:37:25.317884   12488 main.go:141] libmachine: (addons-198632)       <model type='virtio'/>
	I0908 16:37:25.317907   12488 main.go:141] libmachine: (addons-198632)     </interface>
	I0908 16:37:25.317927   12488 main.go:141] libmachine: (addons-198632)     <interface type='network'>
	I0908 16:37:25.317940   12488 main.go:141] libmachine: (addons-198632)       <source network='default'/>
	I0908 16:37:25.317948   12488 main.go:141] libmachine: (addons-198632)       <model type='virtio'/>
	I0908 16:37:25.317960   12488 main.go:141] libmachine: (addons-198632)     </interface>
	I0908 16:37:25.317975   12488 main.go:141] libmachine: (addons-198632)     <serial type='pty'>
	I0908 16:37:25.318004   12488 main.go:141] libmachine: (addons-198632)       <target port='0'/>
	I0908 16:37:25.318025   12488 main.go:141] libmachine: (addons-198632)     </serial>
	I0908 16:37:25.318037   12488 main.go:141] libmachine: (addons-198632)     <console type='pty'>
	I0908 16:37:25.318048   12488 main.go:141] libmachine: (addons-198632)       <target type='serial' port='0'/>
	I0908 16:37:25.318057   12488 main.go:141] libmachine: (addons-198632)     </console>
	I0908 16:37:25.318065   12488 main.go:141] libmachine: (addons-198632)     <rng model='virtio'>
	I0908 16:37:25.318077   12488 main.go:141] libmachine: (addons-198632)       <backend model='random'>/dev/random</backend>
	I0908 16:37:25.318088   12488 main.go:141] libmachine: (addons-198632)     </rng>
	I0908 16:37:25.318096   12488 main.go:141] libmachine: (addons-198632)     
	I0908 16:37:25.318109   12488 main.go:141] libmachine: (addons-198632)     
	I0908 16:37:25.318120   12488 main.go:141] libmachine: (addons-198632)   </devices>
	I0908 16:37:25.318126   12488 main.go:141] libmachine: (addons-198632) </domain>
	I0908 16:37:25.318139   12488 main.go:141] libmachine: (addons-198632) 
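
Once the domain XML above has been defined, the same information that the retry loop below polls for (the DHCP-assigned IP in network mk-addons-198632) can be inspected directly with virsh; a sketch, again assuming the qemu:///system URI used by the driver:

    # inspect the defined domain and its DHCP-assigned address (sketch)
    virsh --connect qemu:///system dumpxml addons-198632
    virsh --connect qemu:///system domifaddr addons-198632 --source lease
    virsh --connect qemu:///system net-dhcp-leases mk-addons-198632
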
	I0908 16:37:25.323600   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:47:6d:48 in network default
	I0908 16:37:25.324124   12488 main.go:141] libmachine: (addons-198632) starting domain...
	I0908 16:37:25.324158   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:25.324167   12488 main.go:141] libmachine: (addons-198632) ensuring networks are active...
	I0908 16:37:25.324801   12488 main.go:141] libmachine: (addons-198632) Ensuring network default is active
	I0908 16:37:25.325127   12488 main.go:141] libmachine: (addons-198632) Ensuring network mk-addons-198632 is active
	I0908 16:37:25.325704   12488 main.go:141] libmachine: (addons-198632) getting domain XML...
	I0908 16:37:25.326440   12488 main.go:141] libmachine: (addons-198632) creating domain...
	I0908 16:37:26.714240   12488 main.go:141] libmachine: (addons-198632) waiting for IP...
	I0908 16:37:26.715099   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:26.715594   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find current IP address of domain addons-198632 in network mk-addons-198632
	I0908 16:37:26.715615   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:26.715554   12510 retry.go:31] will retry after 204.796787ms: waiting for domain to come up
	I0908 16:37:26.922079   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:26.922673   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find current IP address of domain addons-198632 in network mk-addons-198632
	I0908 16:37:26.922698   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:26.922608   12510 retry.go:31] will retry after 242.267384ms: waiting for domain to come up
	I0908 16:37:27.166057   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:27.166418   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find current IP address of domain addons-198632 in network mk-addons-198632
	I0908 16:37:27.166441   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:27.166395   12510 retry.go:31] will retry after 324.897702ms: waiting for domain to come up
	I0908 16:37:27.492872   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:27.493387   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find current IP address of domain addons-198632 in network mk-addons-198632
	I0908 16:37:27.493432   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:27.493348   12510 retry.go:31] will retry after 458.379157ms: waiting for domain to come up
	I0908 16:37:27.952963   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:27.953407   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find current IP address of domain addons-198632 in network mk-addons-198632
	I0908 16:37:27.953443   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:27.953397   12510 retry.go:31] will retry after 747.105557ms: waiting for domain to come up
	I0908 16:37:28.702356   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:28.702806   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find current IP address of domain addons-198632 in network mk-addons-198632
	I0908 16:37:28.702836   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:28.702760   12510 retry.go:31] will retry after 576.297257ms: waiting for domain to come up
	I0908 16:37:29.280483   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:29.280909   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find current IP address of domain addons-198632 in network mk-addons-198632
	I0908 16:37:29.280937   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:29.280871   12510 retry.go:31] will retry after 1.085017714s: waiting for domain to come up
	I0908 16:37:30.367679   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:30.368028   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find current IP address of domain addons-198632 in network mk-addons-198632
	I0908 16:37:30.368070   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:30.368023   12510 retry.go:31] will retry after 972.128524ms: waiting for domain to come up
	I0908 16:37:31.342420   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:31.342892   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find current IP address of domain addons-198632 in network mk-addons-198632
	I0908 16:37:31.342920   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:31.342871   12510 retry.go:31] will retry after 1.45083481s: waiting for domain to come up
	I0908 16:37:32.795648   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:32.796111   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find current IP address of domain addons-198632 in network mk-addons-198632
	I0908 16:37:32.796141   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:32.796088   12510 retry.go:31] will retry after 1.874359158s: waiting for domain to come up
	I0908 16:37:34.671873   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:34.672413   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find current IP address of domain addons-198632 in network mk-addons-198632
	I0908 16:37:34.672494   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:34.672358   12510 retry.go:31] will retry after 2.812560911s: waiting for domain to come up
	I0908 16:37:37.488546   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:37.488995   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find current IP address of domain addons-198632 in network mk-addons-198632
	I0908 16:37:37.489029   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:37.488958   12510 retry.go:31] will retry after 3.461467228s: waiting for domain to come up
	I0908 16:37:40.953186   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:40.953589   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find current IP address of domain addons-198632 in network mk-addons-198632
	I0908 16:37:40.953620   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:40.953556   12510 retry.go:31] will retry after 2.852262922s: waiting for domain to come up
	I0908 16:37:43.809655   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:43.810140   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find current IP address of domain addons-198632 in network mk-addons-198632
	I0908 16:37:43.810157   12488 main.go:141] libmachine: (addons-198632) DBG | I0908 16:37:43.810114   12510 retry.go:31] will retry after 4.050433091s: waiting for domain to come up
	I0908 16:37:47.864810   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:47.865336   12488 main.go:141] libmachine: (addons-198632) found domain IP: 192.168.39.229
	I0908 16:37:47.865357   12488 main.go:141] libmachine: (addons-198632) reserving static IP address...
	I0908 16:37:47.865366   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has current primary IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:47.865831   12488 main.go:141] libmachine: (addons-198632) DBG | unable to find host DHCP lease matching {name: "addons-198632", mac: "52:54:00:8f:1c:56", ip: "192.168.39.229"} in network mk-addons-198632
	I0908 16:37:47.939028   12488 main.go:141] libmachine: (addons-198632) reserved static IP address 192.168.39.229 for domain addons-198632
	I0908 16:37:47.939054   12488 main.go:141] libmachine: (addons-198632) DBG | Getting to WaitForSSH function...
	I0908 16:37:47.939063   12488 main.go:141] libmachine: (addons-198632) waiting for SSH...
	I0908 16:37:47.941588   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:47.942039   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:47.942056   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:47.942148   12488 main.go:141] libmachine: (addons-198632) DBG | Using SSH client type: external
	I0908 16:37:47.942175   12488 main.go:141] libmachine: (addons-198632) DBG | Using SSH private key: /home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa (-rw-------)
	I0908 16:37:47.942302   12488 main.go:141] libmachine: (addons-198632) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 16:37:47.942372   12488 main.go:141] libmachine: (addons-198632) DBG | About to run SSH command:
	I0908 16:37:47.942389   12488 main.go:141] libmachine: (addons-198632) DBG | exit 0
	I0908 16:37:48.074882   12488 main.go:141] libmachine: (addons-198632) DBG | SSH cmd err, output: <nil>: 
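
The WaitForSSH probe above shells out to the system ssh binary with the logged argument vector; reassembled as a single manual command (options, key path, and address taken from the log, only the ordering rearranged), it is roughly:

    # equivalent manual SSH probe against the new VM (sketch)
    ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa \
        -p 22 docker@192.168.39.229 "exit 0"
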
	I0908 16:37:48.075150   12488 main.go:141] libmachine: (addons-198632) KVM machine creation complete
	I0908 16:37:48.075438   12488 main.go:141] libmachine: (addons-198632) Calling .GetConfigRaw
	I0908 16:37:48.075962   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:37:48.076179   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:37:48.076306   12488 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0908 16:37:48.076320   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:37:48.077572   12488 main.go:141] libmachine: Detecting operating system of created instance...
	I0908 16:37:48.077585   12488 main.go:141] libmachine: Waiting for SSH to be available...
	I0908 16:37:48.077589   12488 main.go:141] libmachine: Getting to WaitForSSH function...
	I0908 16:37:48.077594   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:37:48.079569   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:48.079872   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:48.079901   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:48.079973   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:37:48.080131   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:48.080282   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:48.080411   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:37:48.080558   12488 main.go:141] libmachine: Using SSH client type: native
	I0908 16:37:48.080768   12488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0908 16:37:48.080780   12488 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0908 16:37:48.186462   12488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 16:37:48.186481   12488 main.go:141] libmachine: Detecting the provisioner...
	I0908 16:37:48.186501   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:37:48.189525   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:48.189815   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:48.189834   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:48.189950   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:37:48.190155   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:48.190307   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:48.190458   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:37:48.190679   12488 main.go:141] libmachine: Using SSH client type: native
	I0908 16:37:48.190876   12488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0908 16:37:48.190888   12488 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0908 16:37:48.300415   12488 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0908 16:37:48.300505   12488 main.go:141] libmachine: found compatible host: buildroot
	I0908 16:37:48.300514   12488 main.go:141] libmachine: Provisioning with buildroot...
	I0908 16:37:48.300521   12488 main.go:141] libmachine: (addons-198632) Calling .GetMachineName
	I0908 16:37:48.300733   12488 buildroot.go:166] provisioning hostname "addons-198632"
	I0908 16:37:48.300758   12488 main.go:141] libmachine: (addons-198632) Calling .GetMachineName
	I0908 16:37:48.300901   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:37:48.303550   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:48.303892   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:48.303912   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:48.304071   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:37:48.304221   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:48.304366   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:48.304536   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:37:48.304747   12488 main.go:141] libmachine: Using SSH client type: native
	I0908 16:37:48.304948   12488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0908 16:37:48.304960   12488 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-198632 && echo "addons-198632" | sudo tee /etc/hostname
	I0908 16:37:48.433250   12488 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-198632
	
	I0908 16:37:48.433276   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:37:48.436837   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:48.437280   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:48.437302   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:48.437479   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:37:48.437627   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:48.437763   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:48.437867   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:37:48.437985   12488 main.go:141] libmachine: Using SSH client type: native
	I0908 16:37:48.438172   12488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0908 16:37:48.438187   12488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-198632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-198632/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-198632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 16:37:48.558770   12488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 16:37:48.558796   12488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21504-7629/.minikube CaCertPath:/home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21504-7629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21504-7629/.minikube}
	I0908 16:37:48.558826   12488 buildroot.go:174] setting up certificates
	I0908 16:37:48.558838   12488 provision.go:84] configureAuth start
	I0908 16:37:48.558847   12488 main.go:141] libmachine: (addons-198632) Calling .GetMachineName
	I0908 16:37:48.559112   12488 main.go:141] libmachine: (addons-198632) Calling .GetIP
	I0908 16:37:48.561686   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:48.562073   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:48.562091   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:48.562311   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:37:48.564590   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:48.564875   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:48.564894   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:48.565041   12488 provision.go:143] copyHostCerts
	I0908 16:37:48.565120   12488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21504-7629/.minikube/key.pem (1679 bytes)
	I0908 16:37:48.565282   12488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21504-7629/.minikube/ca.pem (1078 bytes)
	I0908 16:37:48.565394   12488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21504-7629/.minikube/cert.pem (1123 bytes)
	I0908 16:37:48.565462   12488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21504-7629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca-key.pem org=jenkins.addons-198632 san=[127.0.0.1 192.168.39.229 addons-198632 localhost minikube]
	I0908 16:37:49.232201   12488 provision.go:177] copyRemoteCerts
	I0908 16:37:49.232265   12488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 16:37:49.232286   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:37:49.234858   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.235129   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:49.235163   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.235320   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:37:49.235506   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:49.235637   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:37:49.235749   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:37:49.323942   12488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 16:37:49.354757   12488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 16:37:49.384782   12488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 16:37:49.414879   12488 provision.go:87] duration metric: took 856.024957ms to configureAuth
	I0908 16:37:49.414910   12488 buildroot.go:189] setting minikube options for container-runtime
	I0908 16:37:49.415148   12488 config.go:182] Loaded profile config "addons-198632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 16:37:49.415246   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:37:49.417913   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.418201   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:49.418237   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.418413   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:37:49.418587   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:49.418717   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:49.418832   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:37:49.418981   12488 main.go:141] libmachine: Using SSH client type: native
	I0908 16:37:49.419165   12488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0908 16:37:49.419180   12488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 16:37:49.666090   12488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 16:37:49.666118   12488 main.go:141] libmachine: Checking connection to Docker...
	I0908 16:37:49.666129   12488 main.go:141] libmachine: (addons-198632) Calling .GetURL
	I0908 16:37:49.667359   12488 main.go:141] libmachine: (addons-198632) DBG | using libvirt version 6000000
	I0908 16:37:49.669512   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.669799   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:49.669844   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.670015   12488 main.go:141] libmachine: Docker is up and running!
	I0908 16:37:49.670032   12488 main.go:141] libmachine: Reticulating splines...
	I0908 16:37:49.670040   12488 client.go:171] duration metric: took 25.278827934s to LocalClient.Create
	I0908 16:37:49.670063   12488 start.go:167] duration metric: took 25.278914658s to libmachine.API.Create "addons-198632"
	I0908 16:37:49.670073   12488 start.go:293] postStartSetup for "addons-198632" (driver="kvm2")
	I0908 16:37:49.670081   12488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 16:37:49.670097   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:37:49.670294   12488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 16:37:49.670313   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:37:49.672491   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.672823   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:49.672842   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.672984   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:37:49.673249   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:49.673382   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:37:49.673506   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:37:49.762677   12488 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 16:37:49.767854   12488 info.go:137] Remote host: Buildroot 2025.02
	I0908 16:37:49.767880   12488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-7629/.minikube/addons for local assets ...
	I0908 16:37:49.767944   12488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-7629/.minikube/files for local assets ...
	I0908 16:37:49.767966   12488 start.go:296] duration metric: took 97.887822ms for postStartSetup
	I0908 16:37:49.767995   12488 main.go:141] libmachine: (addons-198632) Calling .GetConfigRaw
	I0908 16:37:49.768625   12488 main.go:141] libmachine: (addons-198632) Calling .GetIP
	I0908 16:37:49.771204   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.771577   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:49.771606   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.771803   12488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/config.json ...
	I0908 16:37:49.771982   12488 start.go:128] duration metric: took 25.398945699s to createHost
	I0908 16:37:49.772002   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:37:49.774089   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.774425   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:49.774453   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.774569   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:37:49.774775   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:49.774953   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:49.775058   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:37:49.775200   12488 main.go:141] libmachine: Using SSH client type: native
	I0908 16:37:49.775428   12488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0908 16:37:49.775444   12488 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 16:37:49.884442   12488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757349469.857311922
	
	I0908 16:37:49.884465   12488 fix.go:216] guest clock: 1757349469.857311922
	I0908 16:37:49.884475   12488 fix.go:229] Guest: 2025-09-08 16:37:49.857311922 +0000 UTC Remote: 2025-09-08 16:37:49.771993252 +0000 UTC m=+25.499319801 (delta=85.31867ms)
	I0908 16:37:49.884533   12488 fix.go:200] guest clock delta is within tolerance: 85.31867ms
	I0908 16:37:49.884544   12488 start.go:83] releasing machines lock for "addons-198632", held for 25.511588899s
	I0908 16:37:49.884576   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:37:49.884858   12488 main.go:141] libmachine: (addons-198632) Calling .GetIP
	I0908 16:37:49.887577   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.887912   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:49.887938   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.888094   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:37:49.888572   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:37:49.888735   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:37:49.888832   12488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 16:37:49.888879   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:37:49.888921   12488 ssh_runner.go:195] Run: cat /version.json
	I0908 16:37:49.888948   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:37:49.891427   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.891745   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.891799   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:49.891818   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.891949   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:37:49.892080   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:49.892101   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:49.892105   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:49.892267   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:37:49.892269   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:37:49.892452   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:37:49.892460   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:37:49.892571   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:37:49.892695   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:37:49.998633   12488 ssh_runner.go:195] Run: systemctl --version
	I0908 16:37:50.005467   12488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 16:37:50.164151   12488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 16:37:50.171293   12488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 16:37:50.171363   12488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 16:37:50.191949   12488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 16:37:50.191979   12488 start.go:495] detecting cgroup driver to use...
	I0908 16:37:50.192054   12488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 16:37:50.213393   12488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 16:37:50.230918   12488 docker.go:218] disabling cri-docker service (if available) ...
	I0908 16:37:50.230981   12488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 16:37:50.247316   12488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 16:37:50.263481   12488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 16:37:50.403849   12488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 16:37:50.550297   12488 docker.go:234] disabling docker service ...
	I0908 16:37:50.550368   12488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 16:37:50.569521   12488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 16:37:50.584606   12488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 16:37:50.797190   12488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 16:37:50.940650   12488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 16:37:50.957420   12488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 16:37:50.980141   12488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 16:37:50.980206   12488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 16:37:50.993419   12488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 16:37:50.993500   12488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 16:37:51.006848   12488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 16:37:51.020025   12488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 16:37:51.032851   12488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 16:37:51.046157   12488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 16:37:51.058663   12488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 16:37:51.079181   12488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 16:37:51.091777   12488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 16:37:51.102260   12488 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 16:37:51.102326   12488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 16:37:51.122096   12488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 16:37:51.133952   12488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 16:37:51.272832   12488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 16:37:51.388116   12488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 16:37:51.388220   12488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 16:37:51.393655   12488 start.go:563] Will wait 60s for crictl version
	I0908 16:37:51.393729   12488 ssh_runner.go:195] Run: which crictl
	I0908 16:37:51.398019   12488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 16:37:51.440273   12488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 16:37:51.440388   12488 ssh_runner.go:195] Run: crio --version
	I0908 16:37:51.469199   12488 ssh_runner.go:195] Run: crio --version
	I0908 16:37:51.500745   12488 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0908 16:37:51.502053   12488 main.go:141] libmachine: (addons-198632) Calling .GetIP
	I0908 16:37:51.504478   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:51.504811   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:37:51.504837   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:37:51.505044   12488 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0908 16:37:51.509589   12488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 16:37:51.524704   12488 kubeadm.go:875] updating cluster {Name:addons-198632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-198632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 16:37:51.524796   12488 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 16:37:51.524844   12488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 16:37:51.563225   12488 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0908 16:37:51.563281   12488 ssh_runner.go:195] Run: which lz4
	I0908 16:37:51.567842   12488 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 16:37:51.572900   12488 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 16:37:51.572923   12488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0908 16:37:53.041403   12488 crio.go:462] duration metric: took 1.473592661s to copy over tarball
	I0908 16:37:53.041472   12488 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0908 16:37:54.741419   12488 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.699919412s)
	I0908 16:37:54.741460   12488 crio.go:469] duration metric: took 1.700025274s to extract the tarball
	I0908 16:37:54.741470   12488 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0908 16:37:54.782564   12488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 16:37:54.829007   12488 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 16:37:54.829044   12488 cache_images.go:85] Images are preloaded, skipping loading
	I0908 16:37:54.829054   12488 kubeadm.go:926] updating node { 192.168.39.229 8443 v1.34.0 crio true true} ...
	I0908 16:37:54.829147   12488 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-198632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-198632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 16:37:54.829211   12488 ssh_runner.go:195] Run: crio config
	I0908 16:37:54.879665   12488 cni.go:84] Creating CNI manager for ""
	I0908 16:37:54.879688   12488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 16:37:54.879700   12488 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 16:37:54.879719   12488 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.229 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-198632 NodeName:addons-198632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 16:37:54.879826   12488 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-198632"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.229"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 16:37:54.879885   12488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 16:37:54.892576   12488 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 16:37:54.892659   12488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 16:37:54.904735   12488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0908 16:37:54.924875   12488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 16:37:54.944758   12488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0908 16:37:54.965579   12488 ssh_runner.go:195] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0908 16:37:54.970072   12488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 16:37:54.985042   12488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 16:37:55.130409   12488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 16:37:55.151369   12488 certs.go:68] Setting up /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632 for IP: 192.168.39.229
	I0908 16:37:55.151397   12488 certs.go:194] generating shared ca certs ...
	I0908 16:37:55.151421   12488 certs.go:226] acquiring lock for ca certs: {Name:mk97fb352a8636fddbcae5a6f40efc0f573cd949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:55.151600   12488 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21504-7629/.minikube/ca.key
	I0908 16:37:55.374943   12488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-7629/.minikube/ca.crt ...
	I0908 16:37:55.374972   12488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/.minikube/ca.crt: {Name:mkd260112924d58135de7099620e32e1dc32a254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:55.375176   12488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-7629/.minikube/ca.key ...
	I0908 16:37:55.375190   12488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/.minikube/ca.key: {Name:mk7c2ab157623007658b791be81737413bedf674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:55.375295   12488 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21504-7629/.minikube/proxy-client-ca.key
	I0908 16:37:55.431705   12488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-7629/.minikube/proxy-client-ca.crt ...
	I0908 16:37:55.431731   12488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/.minikube/proxy-client-ca.crt: {Name:mk6cdadc4ac55c1f1f8a94fe7a0e0e52192bd85d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:55.431909   12488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-7629/.minikube/proxy-client-ca.key ...
	I0908 16:37:55.431923   12488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/.minikube/proxy-client-ca.key: {Name:mk632e3d359bae9a73be712391d74693b95f32db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:55.432015   12488 certs.go:256] generating profile certs ...
	I0908 16:37:55.432085   12488 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.key
	I0908 16:37:55.432105   12488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt with IP's: []
	I0908 16:37:55.676395   12488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt ...
	I0908 16:37:55.676549   12488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: {Name:mk9d76f5a09a92c62c1fb8bc8d7cf0907b8c8bf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:55.676738   12488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.key ...
	I0908 16:37:55.676752   12488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.key: {Name:mk1f3d675d78be3328149f20bd58bf30640e0f31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:55.676856   12488 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/apiserver.key.1e5df1a5
	I0908 16:37:55.676880   12488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/apiserver.crt.1e5df1a5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.229]
	I0908 16:37:56.198144   12488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/apiserver.crt.1e5df1a5 ...
	I0908 16:37:56.198176   12488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/apiserver.crt.1e5df1a5: {Name:mkcdd26f241913ed858e45965d2f5b0ff4b2f5f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:56.198385   12488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/apiserver.key.1e5df1a5 ...
	I0908 16:37:56.198403   12488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/apiserver.key.1e5df1a5: {Name:mke05a1aef6c66318ab1aab6955b7d25f8f94f12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:56.198508   12488 certs.go:381] copying /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/apiserver.crt.1e5df1a5 -> /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/apiserver.crt
	I0908 16:37:56.198614   12488 certs.go:385] copying /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/apiserver.key.1e5df1a5 -> /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/apiserver.key
	I0908 16:37:56.198714   12488 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/proxy-client.key
	I0908 16:37:56.198739   12488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/proxy-client.crt with IP's: []
	I0908 16:37:56.739252   12488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/proxy-client.crt ...
	I0908 16:37:56.739281   12488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/proxy-client.crt: {Name:mkc1082f5b74295c2cc117f9f642ca3c5669855f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:56.739439   12488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/proxy-client.key ...
	I0908 16:37:56.739449   12488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/proxy-client.key: {Name:mkcde2998b89c4c42371d9343c0441f06670852d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:56.739609   12488 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca-key.pem (1671 bytes)
	I0908 16:37:56.739649   12488 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem (1078 bytes)
	I0908 16:37:56.739672   12488 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/cert.pem (1123 bytes)
	I0908 16:37:56.739697   12488 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/key.pem (1679 bytes)
	I0908 16:37:56.740266   12488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 16:37:56.788680   12488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 16:37:56.833013   12488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 16:37:56.866353   12488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 16:37:56.896939   12488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 16:37:56.927593   12488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 16:37:56.959928   12488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 16:37:56.991144   12488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 16:37:57.021179   12488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 16:37:57.050944   12488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 16:37:57.072323   12488 ssh_runner.go:195] Run: openssl version
	I0908 16:37:57.078796   12488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 16:37:57.092332   12488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 16:37:57.097628   12488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 16:37:57.097675   12488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 16:37:57.104889   12488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 16:37:57.118620   12488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 16:37:57.123446   12488 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 16:37:57.123492   12488 kubeadm.go:392] StartCluster: {Name:addons-198632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-198632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 16:37:57.123567   12488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 16:37:57.123632   12488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 16:37:57.164429   12488 cri.go:89] found id: ""
	I0908 16:37:57.164492   12488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 16:37:57.176851   12488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 16:37:57.189492   12488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 16:37:57.202180   12488 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 16:37:57.202210   12488 kubeadm.go:157] found existing configuration files:
	
	I0908 16:37:57.202264   12488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 16:37:57.213765   12488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 16:37:57.213838   12488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 16:37:57.225831   12488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 16:37:57.237065   12488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 16:37:57.237146   12488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 16:37:57.249676   12488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 16:37:57.261026   12488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 16:37:57.261079   12488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 16:37:57.273271   12488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 16:37:57.284948   12488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 16:37:57.285019   12488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 16:37:57.297419   12488 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0908 16:37:57.464418   12488 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 16:38:10.023068   12488 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 16:38:10.023176   12488 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 16:38:10.023251   12488 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 16:38:10.023356   12488 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 16:38:10.023461   12488 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 16:38:10.023536   12488 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 16:38:10.025326   12488 out.go:252]   - Generating certificates and keys ...
	I0908 16:38:10.025419   12488 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 16:38:10.025516   12488 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 16:38:10.025609   12488 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 16:38:10.025686   12488 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 16:38:10.025777   12488 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 16:38:10.025857   12488 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 16:38:10.025939   12488 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 16:38:10.026127   12488 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-198632 localhost] and IPs [192.168.39.229 127.0.0.1 ::1]
	I0908 16:38:10.026184   12488 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 16:38:10.026319   12488 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-198632 localhost] and IPs [192.168.39.229 127.0.0.1 ::1]
	I0908 16:38:10.026393   12488 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 16:38:10.026486   12488 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 16:38:10.026545   12488 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 16:38:10.026641   12488 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 16:38:10.026735   12488 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 16:38:10.026832   12488 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 16:38:10.026915   12488 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 16:38:10.027000   12488 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 16:38:10.027051   12488 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 16:38:10.027120   12488 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 16:38:10.027183   12488 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 16:38:10.028851   12488 out.go:252]   - Booting up control plane ...
	I0908 16:38:10.028936   12488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 16:38:10.029004   12488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 16:38:10.029063   12488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 16:38:10.029148   12488 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 16:38:10.029244   12488 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 16:38:10.029361   12488 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 16:38:10.029437   12488 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 16:38:10.029472   12488 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 16:38:10.029609   12488 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 16:38:10.029708   12488 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 16:38:10.029765   12488 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00665544s
	I0908 16:38:10.029862   12488 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 16:38:10.029962   12488 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.229:8443/livez
	I0908 16:38:10.030097   12488 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 16:38:10.030187   12488 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 16:38:10.030273   12488 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.123216836s
	I0908 16:38:10.030357   12488 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.603653895s
	I0908 16:38:10.030413   12488 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.501813005s
	I0908 16:38:10.030511   12488 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 16:38:10.030621   12488 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 16:38:10.030697   12488 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 16:38:10.030851   12488 kubeadm.go:310] [mark-control-plane] Marking the node addons-198632 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 16:38:10.030915   12488 kubeadm.go:310] [bootstrap-token] Using token: jf93u0.89a9kadvn2wjsfds
	I0908 16:38:10.033444   12488 out.go:252]   - Configuring RBAC rules ...
	I0908 16:38:10.033569   12488 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 16:38:10.033649   12488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 16:38:10.033800   12488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 16:38:10.033930   12488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 16:38:10.034027   12488 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 16:38:10.034107   12488 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 16:38:10.034209   12488 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 16:38:10.034248   12488 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 16:38:10.034289   12488 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 16:38:10.034295   12488 kubeadm.go:310] 
	I0908 16:38:10.034346   12488 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 16:38:10.034354   12488 kubeadm.go:310] 
	I0908 16:38:10.034426   12488 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 16:38:10.034432   12488 kubeadm.go:310] 
	I0908 16:38:10.034454   12488 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 16:38:10.034509   12488 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 16:38:10.034566   12488 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 16:38:10.034572   12488 kubeadm.go:310] 
	I0908 16:38:10.034617   12488 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 16:38:10.034622   12488 kubeadm.go:310] 
	I0908 16:38:10.034718   12488 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 16:38:10.034735   12488 kubeadm.go:310] 
	I0908 16:38:10.034790   12488 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 16:38:10.034855   12488 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 16:38:10.034914   12488 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 16:38:10.034919   12488 kubeadm.go:310] 
	I0908 16:38:10.034988   12488 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 16:38:10.035054   12488 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 16:38:10.035060   12488 kubeadm.go:310] 
	I0908 16:38:10.035157   12488 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jf93u0.89a9kadvn2wjsfds \
	I0908 16:38:10.035311   12488 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d2ce3293c8af8e32c36b3cd46a9558007f7638885b071193091a795ed80a03ad \
	I0908 16:38:10.035345   12488 kubeadm.go:310] 	--control-plane 
	I0908 16:38:10.035352   12488 kubeadm.go:310] 
	I0908 16:38:10.035462   12488 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 16:38:10.035472   12488 kubeadm.go:310] 
	I0908 16:38:10.035587   12488 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jf93u0.89a9kadvn2wjsfds \
	I0908 16:38:10.035752   12488 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d2ce3293c8af8e32c36b3cd46a9558007f7638885b071193091a795ed80a03ad 
	I0908 16:38:10.035765   12488 cni.go:84] Creating CNI manager for ""
	I0908 16:38:10.035777   12488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 16:38:10.037262   12488 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 16:38:10.038431   12488 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 16:38:10.056216   12488 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0908 16:38:10.081274   12488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 16:38:10.081435   12488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:38:10.081444   12488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-198632 minikube.k8s.io/updated_at=2025_09_08T16_38_10_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6 minikube.k8s.io/name=addons-198632 minikube.k8s.io/primary=true
	I0908 16:38:10.216420   12488 ops.go:34] apiserver oom_adj: -16
	I0908 16:38:10.216430   12488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:38:10.717109   12488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:38:11.217475   12488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:38:11.717087   12488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:38:12.217352   12488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:38:12.716562   12488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:38:13.216602   12488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:38:13.717567   12488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:38:14.216651   12488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:38:14.717128   12488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 16:38:14.850358   12488 kubeadm.go:1105] duration metric: took 4.769008658s to wait for elevateKubeSystemPrivileges
	I0908 16:38:14.850391   12488 kubeadm.go:394] duration metric: took 17.726902976s to StartCluster
	I0908 16:38:14.850408   12488 settings.go:142] acquiring lock: {Name:mk1c22e0fe8486f74cbd8991c9b3bb6f4c36c978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:38:14.850513   12488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 16:38:14.850928   12488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/kubeconfig: {Name:mkb59774845ad4e65ea2ac11e21880c504ffe601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:38:14.851105   12488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 16:38:14.851125   12488 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 16:38:14.851168   12488 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0908 16:38:14.851283   12488 addons.go:69] Setting yakd=true in profile "addons-198632"
	I0908 16:38:14.851288   12488 addons.go:69] Setting ingress-dns=true in profile "addons-198632"
	I0908 16:38:14.851309   12488 addons.go:238] Setting addon ingress-dns=true in "addons-198632"
	I0908 16:38:14.851314   12488 addons.go:69] Setting inspektor-gadget=true in profile "addons-198632"
	I0908 16:38:14.851336   12488 addons.go:238] Setting addon inspektor-gadget=true in "addons-198632"
	I0908 16:38:14.851371   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.851382   12488 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-198632"
	I0908 16:38:14.851389   12488 addons.go:69] Setting storage-provisioner=true in profile "addons-198632"
	I0908 16:38:14.851390   12488 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-198632"
	I0908 16:38:14.851404   12488 addons.go:238] Setting addon storage-provisioner=true in "addons-198632"
	I0908 16:38:14.851407   12488 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-198632"
	I0908 16:38:14.851421   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.851426   12488 addons.go:69] Setting cloud-spanner=true in profile "addons-198632"
	I0908 16:38:14.851413   12488 addons.go:69] Setting gcp-auth=true in profile "addons-198632"
	I0908 16:38:14.851434   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.851417   12488 addons.go:69] Setting ingress=true in profile "addons-198632"
	I0908 16:38:14.851441   12488 addons.go:238] Setting addon cloud-spanner=true in "addons-198632"
	I0908 16:38:14.851451   12488 mustload.go:65] Loading cluster: addons-198632
	I0908 16:38:14.851464   12488 addons.go:238] Setting addon ingress=true in "addons-198632"
	I0908 16:38:14.851498   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.851515   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.851694   12488 config.go:182] Loaded profile config "addons-198632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 16:38:14.851369   12488 addons.go:69] Setting registry-creds=true in profile "addons-198632"
	I0908 16:38:14.851856   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.851860   12488 addons.go:238] Setting addon registry-creds=true in "addons-198632"
	I0908 16:38:14.851873   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.851880   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.851887   12488 addons.go:69] Setting registry=true in profile "addons-198632"
	I0908 16:38:14.851891   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.851901   12488 addons.go:238] Setting addon registry=true in "addons-198632"
	I0908 16:38:14.851920   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.851965   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.852007   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.852014   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.852027   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.852044   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.852098   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.852204   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.852224   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.852261   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.852278   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.851372   12488 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-198632"
	I0908 16:38:14.852415   12488 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-198632"
	I0908 16:38:14.852443   12488 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-198632"
	I0908 16:38:14.851353   12488 config.go:182] Loaded profile config "addons-198632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 16:38:14.851309   12488 addons.go:238] Setting addon yakd=true in "addons-198632"
	I0908 16:38:14.852746   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.852845   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.852866   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.852889   12488 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-198632"
	I0908 16:38:14.851377   12488 addons.go:69] Setting metrics-server=true in profile "addons-198632"
	I0908 16:38:14.852916   12488 addons.go:238] Setting addon metrics-server=true in "addons-198632"
	I0908 16:38:14.852932   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.853138   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.853163   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.851860   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.853218   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.851879   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.853308   12488 addons.go:69] Setting default-storageclass=true in profile "addons-198632"
	I0908 16:38:14.853323   12488 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-198632"
	I0908 16:38:14.853728   12488 addons.go:69] Setting volumesnapshots=true in profile "addons-198632"
	I0908 16:38:14.853755   12488 addons.go:238] Setting addon volumesnapshots=true in "addons-198632"
	I0908 16:38:14.853778   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.853850   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.854153   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.851383   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.854181   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.854220   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.854239   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.854268   12488 addons.go:69] Setting volcano=true in profile "addons-198632"
	I0908 16:38:14.854278   12488 addons.go:238] Setting addon volcano=true in "addons-198632"
	I0908 16:38:14.851420   12488 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-198632"
	I0908 16:38:14.854413   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.854506   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.854521   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.854581   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.853292   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.854790   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.856212   12488 out.go:179] * Verifying Kubernetes components...
	I0908 16:38:14.880094   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.880348   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.880426   12488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 16:38:14.882537   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.882561   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.882923   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I0908 16:38:14.883030   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34467
	I0908 16:38:14.883090   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33979
	I0908 16:38:14.883171   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39223
	I0908 16:38:14.883184   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0908 16:38:14.883422   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0908 16:38:14.883443   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42061
	I0908 16:38:14.883543   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.883880   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.884001   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.884055   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.884071   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.884159   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.884272   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.884288   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.884348   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.884361   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.884504   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.884544   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.885386   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.885406   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.885727   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.885748   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.885765   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.885780   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.885810   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.885831   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.886404   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.886466   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.886513   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.886573   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.886677   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:14.887321   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.887353   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.887705   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:14.888663   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.888725   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.888736   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.888808   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.898905   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.899735   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.903311   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.903352   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.904956   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.905013   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.905907   12488 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-198632"
	I0908 16:38:14.905956   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.906395   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.906436   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.906886   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.906909   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.907798   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.907891   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43103
	I0908 16:38:14.908542   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.908581   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.914857   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.915980   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.916038   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.916463   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.917192   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.917284   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.925210   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45669
	I0908 16:38:14.928817   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44697
	I0908 16:38:14.928826   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.928832   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34713
	I0908 16:38:14.929532   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.929814   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.930013   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.930025   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.930390   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.930570   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.930584   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.931378   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.931427   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.931892   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.931918   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.932001   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33155
	I0908 16:38:14.932194   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.933096   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.933134   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.933440   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.933482   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.933851   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.954736   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.954796   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I0908 16:38:14.954741   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0908 16:38:14.954962   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40937
	I0908 16:38:14.955478   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.955530   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.955771   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43297
	I0908 16:38:14.956230   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35979
	I0908 16:38:14.956337   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40703
	I0908 16:38:14.956339   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I0908 16:38:14.956397   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38411
	I0908 16:38:14.956522   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34669
	I0908 16:38:14.956746   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.956814   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.956831   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.956840   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.957611   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.957639   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.957676   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.957728   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.957807   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.957813   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.957827   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.957880   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.957930   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.957988   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.958008   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.958176   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.958194   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.958223   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45781
	I0908 16:38:14.958248   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:14.958285   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.960801   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.960854   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.960937   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.960957   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.960958   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.960967   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.960978   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.960989   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.961000   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.961037   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.961112   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.961371   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.961407   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.961443   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.961697   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.961838   12488 addons.go:238] Setting addon default-storageclass=true in "addons-198632"
	I0908 16:38:14.961883   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:14.961892   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:14.961893   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:14.962259   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.962283   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.962554   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.962674   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.962686   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.962730   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.962752   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.962734   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.962927   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.962933   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:14.962970   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.963018   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.963060   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.963404   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.963979   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.964023   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.964309   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.964727   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.964755   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.964757   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:14.964831   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.965849   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.965867   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.967985   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.968106   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:14.968161   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:14.968369   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:14.968435   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:14.970134   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:14.970416   12488 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0908 16:38:14.970425   12488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0908 16:38:14.970458   12488 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0908 16:38:14.971807   12488 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0908 16:38:14.972569   12488 out.go:179]   - Using image docker.io/registry:3.0.0
	I0908 16:38:14.972650   12488 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 16:38:14.972968   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0908 16:38:14.972990   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:14.973477   12488 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 16:38:14.973497   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0908 16:38:14.973517   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:14.974721   12488 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0908 16:38:14.974741   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0908 16:38:14.974761   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:14.974750   12488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 16:38:14.975322   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46747
	I0908 16:38:14.975841   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.976533   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.976550   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.976911   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.977281   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:14.977689   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:14.977710   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:14.977736   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:14.977784   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:14.977858   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:14.977913   12488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 16:38:14.978039   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:14.978185   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:14.978329   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:14.979014   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43331
	I0908 16:38:14.979263   12488 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 16:38:14.979283   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0908 16:38:14.979298   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:14.979452   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.980433   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.980459   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.980881   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.981086   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:14.982859   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:14.983300   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:14.983320   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:14.983510   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:14.983560   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:14.983699   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:14.983751   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:14.983853   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:14.983965   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:14.984375   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:14.984398   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:14.984422   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:14.984434   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:14.984447   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:14.984611   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:14.984771   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:14.984813   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:14.984878   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:14.985453   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:14.985612   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:14.985762   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:14.988575   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41945
	I0908 16:38:14.989091   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.989671   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.989689   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.990207   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.990934   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:14.991272   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
	I0908 16:38:14.991746   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:14.992155   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:14.992170   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:14.992550   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:14.992740   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:14.993360   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:14.994454   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:14.995137   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:14.995346   12488 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0908 16:38:14.997229   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42297
	I0908 16:38:14.998137   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41661
	I0908 16:38:14.999389   12488 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0908 16:38:14.999626   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:15.000103   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:15.000120   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:15.000433   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:15.000581   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:15.001667   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:15.002295   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:15.002321   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:15.002737   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:15.003485   12488 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0908 16:38:15.003548   12488 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0908 16:38:15.003558   12488 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0908 16:38:15.003577   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:15.004128   12488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0908 16:38:15.004141   12488 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0908 16:38:15.004157   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:15.004939   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:15.005838   12488 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0908 16:38:15.005854   12488 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0908 16:38:15.005871   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:15.006051   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:15.006107   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:15.006767   12488 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0908 16:38:15.008369   12488 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 16:38:15.008386   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0908 16:38:15.008404   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:15.008981   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.009498   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:15.009537   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.009710   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:15.009898   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:15.010103   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:15.010260   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:15.010649   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45433
	I0908 16:38:15.011056   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38065
	I0908 16:38:15.011528   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:15.011636   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:15.011690   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.011946   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:15.011964   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:15.012156   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:15.012173   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:15.012385   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:15.012592   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:15.012600   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:15.012616   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.012762   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:15.012818   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.013528   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:15.014310   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.014536   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:15.014775   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:15.015462   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:15.015491   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.015848   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:15.016066   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:15.016112   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:15.016171   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:15.016185   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.016411   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:15.016465   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:15.016520   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:15.016556   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:15.016659   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:15.016702   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:15.016742   12488 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 16:38:15.016742   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:15.016895   12488 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0908 16:38:15.017168   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:15.017771   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:15.018620   12488 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 16:38:15.018637   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 16:38:15.018686   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:15.019470   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43595
	I0908 16:38:15.019952   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:15.020448   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:15.020466   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:15.020633   12488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0908 16:38:15.020876   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:15.021065   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:15.022254   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.022968   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:15.022989   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.023148   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:15.023281   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:15.023405   12488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0908 16:38:15.023532   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:15.023748   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:15.023819   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:15.024759   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41987
	I0908 16:38:15.024920   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39243
	I0908 16:38:15.025345   12488 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0908 16:38:15.025432   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:15.025439   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:15.025479   12488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0908 16:38:15.025875   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:15.025892   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:15.025974   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:15.025989   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:15.026314   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:15.026462   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:15.026491   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:15.026899   12488 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0908 16:38:15.026916   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0908 16:38:15.026932   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:15.028265   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I0908 16:38:15.028737   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:15.028835   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:15.029187   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:15.029201   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:15.029524   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:15.029663   12488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0908 16:38:15.029696   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:15.030283   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42141
	I0908 16:38:15.030462   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:15.030863   12488 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0908 16:38:15.031297   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:15.031579   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.032111   12488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0908 16:38:15.032216   12488 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 16:38:15.032228   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0908 16:38:15.032236   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:15.032247   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:15.032254   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.032293   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:15.032297   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:15.032645   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:15.032660   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:15.032903   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:15.032916   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:15.033096   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:15.033105   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:15.033119   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:15.033237   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:15.033379   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:15.033400   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:15.033871   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:15.033891   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:15.033902   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:15.033425   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:15.034318   12488 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0908 16:38:15.035284   12488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0908 16:38:15.036186   12488 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 16:38:15.036203   12488 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 16:38:15.036222   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:15.036316   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:15.036320   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.036342   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:15.036365   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:15.036373   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	W0908 16:38:15.036437   12488 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0908 16:38:15.036633   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:15.036668   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.036976   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:15.037135   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:15.037287   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:15.037464   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:15.038085   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:15.038565   12488 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0908 16:38:15.039486   12488 out.go:179]   - Using image docker.io/busybox:stable
	I0908 16:38:15.039543   12488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0908 16:38:15.039561   12488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0908 16:38:15.039586   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:15.040601   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.040634   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:15.040700   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:15.040724   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.040803   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:15.040923   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:15.041048   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:15.041080   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45337
	I0908 16:38:15.041438   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:15.042033   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:15.042060   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:15.042385   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:15.042562   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:15.042697   12488 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0908 16:38:15.042800   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.043378   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:15.043403   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.043538   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:15.043759   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:15.043901   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:15.044013   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:15.044033   12488 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 16:38:15.044069   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0908 16:38:15.044088   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:15.044174   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:15.044344   12488 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 16:38:15.044362   12488 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 16:38:15.044377   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:15.047096   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.047441   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:15.047469   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.047585   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:15.047710   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:15.047809   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:15.047849   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.047951   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:15.048243   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:15.048258   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:15.048431   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:15.048552   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:15.048642   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:15.048706   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	W0908 16:38:15.393594   12488 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35448->192.168.39.229:22: read: connection reset by peer
	I0908 16:38:15.393627   12488 retry.go:31] will retry after 325.947817ms: ssh: handshake failed: read tcp 192.168.39.1:35448->192.168.39.229:22: read: connection reset by peer
	I0908 16:38:16.012851   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 16:38:16.055152   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 16:38:16.055649   12488 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0908 16:38:16.055671   12488 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0908 16:38:16.058983   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 16:38:16.077978   12488 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:16.078011   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0908 16:38:16.110877   12488 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0908 16:38:16.110912   12488 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0908 16:38:16.114126   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 16:38:16.133087   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 16:38:16.171950   12488 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.320813632s)
	I0908 16:38:16.171963   12488 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.291521602s)
	I0908 16:38:16.172110   12488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 16:38:16.172121   12488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 16:38:16.174003   12488 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0908 16:38:16.174025   12488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0908 16:38:16.283971   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 16:38:16.291366   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 16:38:16.333218   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 16:38:16.338680   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0908 16:38:16.466085   12488 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0908 16:38:16.466114   12488 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0908 16:38:16.477051   12488 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0908 16:38:16.477073   12488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0908 16:38:16.759874   12488 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0908 16:38:16.759900   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0908 16:38:16.782000   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:16.787818   12488 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 16:38:16.787840   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0908 16:38:16.826928   12488 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0908 16:38:16.826954   12488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0908 16:38:17.003929   12488 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0908 16:38:17.003957   12488 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0908 16:38:17.052671   12488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0908 16:38:17.052704   12488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0908 16:38:17.088106   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0908 16:38:17.101635   12488 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0908 16:38:17.101663   12488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0908 16:38:17.115995   12488 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 16:38:17.116017   12488 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 16:38:17.261465   12488 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0908 16:38:17.261493   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0908 16:38:17.315998   12488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0908 16:38:17.316032   12488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0908 16:38:17.420736   12488 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0908 16:38:17.420761   12488 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0908 16:38:17.516187   12488 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 16:38:17.516229   12488 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 16:38:17.625174   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0908 16:38:17.805985   12488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0908 16:38:17.806020   12488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0908 16:38:17.864333   12488 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 16:38:17.864352   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0908 16:38:17.959911   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 16:38:18.218686   12488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0908 16:38:18.218722   12488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0908 16:38:18.408392   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 16:38:18.584085   12488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0908 16:38:18.584108   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0908 16:38:19.109742   12488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0908 16:38:19.109769   12488 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0908 16:38:19.621017   12488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0908 16:38:19.621051   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0908 16:38:20.012009   12488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0908 16:38:20.012032   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0908 16:38:20.345935   12488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 16:38:20.345962   12488 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0908 16:38:20.500113   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 16:38:22.456450   12488 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0908 16:38:22.456499   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:22.459632   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:22.460108   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:22.460135   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:22.460351   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:22.460530   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:22.460701   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:22.460841   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:22.840227   12488 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0908 16:38:22.977564   12488 addons.go:238] Setting addon gcp-auth=true in "addons-198632"
	I0908 16:38:22.977620   12488 host.go:66] Checking if "addons-198632" exists ...
	I0908 16:38:22.977941   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:22.977984   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:22.994478   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41931
	I0908 16:38:22.994952   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:22.995521   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:22.995551   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:22.995957   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:22.996609   12488 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:38:22.996643   12488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:38:23.012809   12488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40185
	I0908 16:38:23.013252   12488 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:38:23.013754   12488 main.go:141] libmachine: Using API Version  1
	I0908 16:38:23.013782   12488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:38:23.014079   12488 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:38:23.014261   12488 main.go:141] libmachine: (addons-198632) Calling .GetState
	I0908 16:38:23.015756   12488 main.go:141] libmachine: (addons-198632) Calling .DriverName
	I0908 16:38:23.015990   12488 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0908 16:38:23.016017   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHHostname
	I0908 16:38:23.019022   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:23.019448   12488 main.go:141] libmachine: (addons-198632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:1c:56", ip: ""} in network mk-addons-198632: {Iface:virbr1 ExpiryTime:2025-09-08 17:37:41 +0000 UTC Type:0 Mac:52:54:00:8f:1c:56 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:addons-198632 Clientid:01:52:54:00:8f:1c:56}
	I0908 16:38:23.019475   12488 main.go:141] libmachine: (addons-198632) DBG | domain addons-198632 has defined IP address 192.168.39.229 and MAC address 52:54:00:8f:1c:56 in network mk-addons-198632
	I0908 16:38:23.019644   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHPort
	I0908 16:38:23.019840   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHKeyPath
	I0908 16:38:23.020019   12488 main.go:141] libmachine: (addons-198632) Calling .GetSSHUsername
	I0908 16:38:23.020149   12488 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/addons-198632/id_rsa Username:docker}
	I0908 16:38:23.928743   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.91585037s)
	I0908 16:38:23.928766   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.869758094s)
	I0908 16:38:23.928795   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.928806   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.928818   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.928841   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.873667112s)
	I0908 16:38:23.928865   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.928807   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.928879   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.928894   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.814744194s)
	I0908 16:38:23.928915   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.928925   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.928972   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.795850205s)
	I0908 16:38:23.929009   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.929020   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.929103   12488 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.756975348s)
	I0908 16:38:23.929161   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.929172   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.929183   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.929183   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.929249   12488 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.757071526s)
	I0908 16:38:23.929266   12488 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
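For reference, the /bin/bash pipeline that just completed rewrites the coredns ConfigMap in place: going by the sed expressions in the command, it inserts a "log" directive ahead of the existing "errors" line and a hosts stanza ahead of the "forward . /etc/resolv.conf" line, so the rewritten Corefile ends up containing roughly the following (reconstructed from the command above, not captured from the cluster):

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}

This is what lets pods resolve host.minikube.internal to the host side of the KVM network (192.168.39.1).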
	I0908 16:38:23.929996   12488 node_ready.go:35] waiting up to 6m0s for node "addons-198632" to be "Ready" ...
	I0908 16:38:23.930102   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.930138   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.930150   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.930159   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.930169   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.930284   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.646247165s)
	I0908 16:38:23.930306   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.930315   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.930370   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.930387   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.930402   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.638997766s)
	I0908 16:38:23.930409   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.930417   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.930427   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.930440   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.930560   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.597320589s)
	I0908 16:38:23.930585   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.930593   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.930689   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.591987222s)
	I0908 16:38:23.930714   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.930724   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.930847   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.148819792s)
	W0908 16:38:23.930869   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:23.929191   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.930886   12488 retry.go:31] will retry after 275.004096ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
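For reference, this first apply failure is a validation error: at least one document in ig-crd.yaml lacks the apiVersion and kind fields that kubectl requires on every manifest, so the addon manager schedules a retry. As a purely illustrative sketch (placeholder names, not the actual inspektor-gadget CRD), a document that passes this validation begins with a header of the form:

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.example.com   # placeholder; the real name lives in ig-crd.yaml

The error text also points at --validate=false as an escape hatch, but minikube chooses to retry the apply instead.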
	I0908 16:38:23.930926   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.842789855s)
	I0908 16:38:23.930947   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.930959   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.930981   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.930994   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.931002   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.931009   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.930959   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.931075   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.305870225s)
	I0908 16:38:23.931097   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.931106   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.931123   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.931138   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.931148   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.931156   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.931214   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.971273247s)
	I0908 16:38:23.931233   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.931244   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.931373   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.522953065s)
	W0908 16:38:23.931403   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0908 16:38:23.931419   12488 retry.go:31] will retry after 136.655092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
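For reference, this second scheduled retry has a different cause: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation as the CRDs that define its kind, and the API server has not yet registered the new types, hence "no matches for kind VolumeSnapshotClass" and the hint to install CRDs first. A manual sequence that avoids the race would look roughly like this (a sketch using standard kubectl usage; file paths are taken from the log, kubeconfig and binary paths omitted for brevity):

	# 1. apply only the snapshot CRDs
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	# 2. wait until the new kinds are served by the API server
	kubectl wait --for condition=established --timeout=60s \
	              crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# 3. only then apply the objects that use those kinds
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	              -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	              -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml

minikube instead retries the whole batch after the 136ms delay noted above, relying on the CRDs having been established by then.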
	I0908 16:38:23.931548   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.931581   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.931590   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.932445   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.932457   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.932465   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.932472   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.932582   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.932602   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.932607   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.932614   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.932620   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.932935   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.932959   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.932964   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.932971   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.932977   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.933005   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.933030   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.933036   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.933178   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.933196   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.933202   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.933209   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.933232   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.933240   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.933246   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.933253   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.933308   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.933316   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.933371   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.933393   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.933413   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.933416   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.933423   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.933439   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.933444   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.933466   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.933485   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.933491   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.933500   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.933506   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.933513   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.933518   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.933532   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.933541   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.933552   12488 addons.go:479] Verifying addon metrics-server=true in "addons-198632"
	I0908 16:38:23.933584   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.933594   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.933603   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.933611   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.933638   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.933653   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.933669   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.933674   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.933682   12488 addons.go:479] Verifying addon registry=true in "addons-198632"
	I0908 16:38:23.933965   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.934018   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.934035   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.934054   12488 addons.go:479] Verifying addon ingress=true in "addons-198632"
	I0908 16:38:23.934152   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.934162   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.934171   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:23.934178   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:23.934371   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.934398   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.934414   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.933291   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.935161   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.935205   12488 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-198632 service yakd-dashboard -n yakd-dashboard
	
	I0908 16:38:23.935216   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.935223   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.936047   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:23.937446   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:23.936071   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:23.938495   12488 out.go:179] * Verifying ingress addon...
	I0908 16:38:23.938575   12488 out.go:179] * Verifying registry addon...
	I0908 16:38:23.940352   12488 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0908 16:38:23.940449   12488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0908 16:38:23.973269   12488 node_ready.go:49] node "addons-198632" is "Ready"
	I0908 16:38:23.973295   12488 node_ready.go:38] duration metric: took 43.259615ms for node "addons-198632" to be "Ready" ...
	I0908 16:38:23.973307   12488 api_server.go:52] waiting for apiserver process to appear ...
	I0908 16:38:23.973348   12488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 16:38:23.998758   12488 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 16:38:23.998784   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:23.998877   12488 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0908 16:38:23.998899   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:24.031271   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:24.031307   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:24.031577   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:24.031634   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	W0908 16:38:24.031733   12488 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0908 16:38:24.063277   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:24.063303   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:24.063664   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:24.063702   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:24.063715   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:24.069147   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 16:38:24.206783   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:24.457965   12488 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-198632" context rescaled to 1 replicas
	I0908 16:38:24.465874   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:24.465985   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:24.957729   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:24.957856   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:25.332793   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.832630563s)
	I0908 16:38:25.332860   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:25.332870   12488 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.316856784s)
	I0908 16:38:25.332889   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:25.332989   12488 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.359620474s)
	I0908 16:38:25.333021   12488 api_server.go:72] duration metric: took 10.481872174s to wait for apiserver process to appear ...
	I0908 16:38:25.333040   12488 api_server.go:88] waiting for apiserver healthz status ...
	I0908 16:38:25.333095   12488 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0908 16:38:25.333196   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:25.333223   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:25.333241   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:25.333254   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:25.333265   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:25.333514   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:25.333527   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:25.333541   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:25.333552   12488 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-198632"
	I0908 16:38:25.334256   12488 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0908 16:38:25.335154   12488 out.go:179] * Verifying csi-hostpath-driver addon...
	I0908 16:38:25.336680   12488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 16:38:25.337207   12488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0908 16:38:25.337909   12488 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0908 16:38:25.337931   12488 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0908 16:38:25.366748   12488 api_server.go:279] https://192.168.39.229:8443/healthz returned 200:
	ok
	I0908 16:38:25.374524   12488 api_server.go:141] control plane version: v1.34.0
	I0908 16:38:25.374551   12488 api_server.go:131] duration metric: took 41.47231ms to wait for apiserver health ...
	I0908 16:38:25.374565   12488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 16:38:25.376501   12488 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 16:38:25.376525   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:25.414074   12488 system_pods.go:59] 20 kube-system pods found
	I0908 16:38:25.414112   12488 system_pods.go:61] "amd-gpu-device-plugin-lhpzm" [2a653b87-ffd2-4e44-9149-6a55a3562d83] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 16:38:25.414127   12488 system_pods.go:61] "coredns-66bc5c9577-7jjdf" [8e86e039-4ebf-4fe7-8034-16e1882f6e45] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 16:38:25.414138   12488 system_pods.go:61] "coredns-66bc5c9577-tfcgh" [49b047ea-0a66-4af1-b221-cb511b51732a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 16:38:25.414147   12488 system_pods.go:61] "csi-hostpath-attacher-0" [26495734-60ab-41e8-b191-e64f6ff61247] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 16:38:25.414154   12488 system_pods.go:61] "csi-hostpath-resizer-0" [0f03347c-e125-4610-a5dc-7adf100f277c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 16:38:25.414163   12488 system_pods.go:61] "csi-hostpathplugin-jczvs" [2e634a2b-cbbc-4f2f-be70-22f498de40e8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 16:38:25.414172   12488 system_pods.go:61] "etcd-addons-198632" [5659fdc4-9fb8-4a66-a8e0-a71f8faf5040] Running
	I0908 16:38:25.414179   12488 system_pods.go:61] "kube-apiserver-addons-198632" [a16e8ffb-9009-4936-b434-39d2127ec9db] Running
	I0908 16:38:25.414188   12488 system_pods.go:61] "kube-controller-manager-addons-198632" [f810e648-8841-4aee-b993-237758596de7] Running
	I0908 16:38:25.414196   12488 system_pods.go:61] "kube-ingress-dns-minikube" [eb618d0d-794b-4274-8651-39bb05d68842] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 16:38:25.414204   12488 system_pods.go:61] "kube-proxy-6dnhn" [56138bc7-98e7-4d65-829a-1d69a7102e66] Running
	I0908 16:38:25.414208   12488 system_pods.go:61] "kube-scheduler-addons-198632" [8ae1055a-cfb0-4d38-88be-cef76c82c04b] Running
	I0908 16:38:25.414215   12488 system_pods.go:61] "metrics-server-85b7d694d7-lxt6t" [a7f04275-2bea-4a1b-a130-7c1ce5d784b4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 16:38:25.414223   12488 system_pods.go:61] "nvidia-device-plugin-daemonset-95xl2" [5441acea-71a3-45ab-b5f1-235609d5d13c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 16:38:25.414240   12488 system_pods.go:61] "registry-66898fdd98-2rfbq" [2e4a3af7-b70f-45f1-a394-ac4021197c28] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 16:38:25.414250   12488 system_pods.go:61] "registry-creds-764b6fb674-rmnns" [f1d2a51e-3c61-48d6-a31f-9dfd939a0a8c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 16:38:25.414265   12488 system_pods.go:61] "registry-proxy-np9l2" [8effc7a9-f128-4316-b4c5-e7bfa6c1a551] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 16:38:25.414276   12488 system_pods.go:61] "snapshot-controller-7d9fbc56b8-k8ttp" [2126cc9d-ee7d-41b0-9e5d-db166ef3fa0a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 16:38:25.414294   12488 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ntt5b" [91a5e259-4481-422b-aa70-0041da7c4074] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 16:38:25.414304   12488 system_pods.go:61] "storage-provisioner" [748f19a7-41a5-4f1a-9a72-dff4a2174162] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 16:38:25.414313   12488 system_pods.go:74] duration metric: took 39.74179ms to wait for pod list to return data ...
	I0908 16:38:25.414326   12488 default_sa.go:34] waiting for default service account to be created ...
	I0908 16:38:25.419607   12488 default_sa.go:45] found service account: "default"
	I0908 16:38:25.419631   12488 default_sa.go:55] duration metric: took 5.29462ms for default service account to be created ...
	I0908 16:38:25.419642   12488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 16:38:25.426161   12488 system_pods.go:86] 20 kube-system pods found
	I0908 16:38:25.426196   12488 system_pods.go:89] "amd-gpu-device-plugin-lhpzm" [2a653b87-ffd2-4e44-9149-6a55a3562d83] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 16:38:25.426209   12488 system_pods.go:89] "coredns-66bc5c9577-7jjdf" [8e86e039-4ebf-4fe7-8034-16e1882f6e45] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 16:38:25.426228   12488 system_pods.go:89] "coredns-66bc5c9577-tfcgh" [49b047ea-0a66-4af1-b221-cb511b51732a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 16:38:25.426237   12488 system_pods.go:89] "csi-hostpath-attacher-0" [26495734-60ab-41e8-b191-e64f6ff61247] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 16:38:25.426244   12488 system_pods.go:89] "csi-hostpath-resizer-0" [0f03347c-e125-4610-a5dc-7adf100f277c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 16:38:25.426261   12488 system_pods.go:89] "csi-hostpathplugin-jczvs" [2e634a2b-cbbc-4f2f-be70-22f498de40e8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 16:38:25.426270   12488 system_pods.go:89] "etcd-addons-198632" [5659fdc4-9fb8-4a66-a8e0-a71f8faf5040] Running
	I0908 16:38:25.426283   12488 system_pods.go:89] "kube-apiserver-addons-198632" [a16e8ffb-9009-4936-b434-39d2127ec9db] Running
	I0908 16:38:25.426290   12488 system_pods.go:89] "kube-controller-manager-addons-198632" [f810e648-8841-4aee-b993-237758596de7] Running
	I0908 16:38:25.426302   12488 system_pods.go:89] "kube-ingress-dns-minikube" [eb618d0d-794b-4274-8651-39bb05d68842] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 16:38:25.426311   12488 system_pods.go:89] "kube-proxy-6dnhn" [56138bc7-98e7-4d65-829a-1d69a7102e66] Running
	I0908 16:38:25.426317   12488 system_pods.go:89] "kube-scheduler-addons-198632" [8ae1055a-cfb0-4d38-88be-cef76c82c04b] Running
	I0908 16:38:25.426328   12488 system_pods.go:89] "metrics-server-85b7d694d7-lxt6t" [a7f04275-2bea-4a1b-a130-7c1ce5d784b4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 16:38:25.426343   12488 system_pods.go:89] "nvidia-device-plugin-daemonset-95xl2" [5441acea-71a3-45ab-b5f1-235609d5d13c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 16:38:25.426354   12488 system_pods.go:89] "registry-66898fdd98-2rfbq" [2e4a3af7-b70f-45f1-a394-ac4021197c28] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 16:38:25.426367   12488 system_pods.go:89] "registry-creds-764b6fb674-rmnns" [f1d2a51e-3c61-48d6-a31f-9dfd939a0a8c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 16:38:25.426383   12488 system_pods.go:89] "registry-proxy-np9l2" [8effc7a9-f128-4316-b4c5-e7bfa6c1a551] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 16:38:25.426393   12488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k8ttp" [2126cc9d-ee7d-41b0-9e5d-db166ef3fa0a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 16:38:25.426407   12488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ntt5b" [91a5e259-4481-422b-aa70-0041da7c4074] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 16:38:25.426417   12488 system_pods.go:89] "storage-provisioner" [748f19a7-41a5-4f1a-9a72-dff4a2174162] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 16:38:25.426428   12488 system_pods.go:126] duration metric: took 6.778713ms to wait for k8s-apps to be running ...
	I0908 16:38:25.426442   12488 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 16:38:25.426497   12488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 16:38:25.450056   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:25.450121   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:25.630208   12488 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0908 16:38:25.630232   12488 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0908 16:38:25.732448   12488 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 16:38:25.732471   12488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0908 16:38:25.814178   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 16:38:25.848019   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:25.956539   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:25.966188   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:26.345760   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:26.448602   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:26.448713   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:26.842070   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:26.947027   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:26.948937   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:27.177727   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.108534574s)
	I0908 16:38:27.177793   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:27.177813   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:27.178091   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:27.178139   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:27.178157   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:27.178158   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:27.178165   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:27.178399   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:27.178419   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:27.178430   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:27.382632   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:27.495669   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:27.495718   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:27.844487   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:27.886359   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.679528487s)
	W0908 16:38:27.886409   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:27.886431   12488 retry.go:31] will retry after 280.406171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:27.886479   12488 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.459944422s)
	I0908 16:38:27.886512   12488 system_svc.go:56] duration metric: took 2.460067582s WaitForService to wait for kubelet
	I0908 16:38:27.886522   12488 kubeadm.go:578] duration metric: took 13.035373754s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 16:38:27.886570   12488 node_conditions.go:102] verifying NodePressure condition ...
	I0908 16:38:27.886541   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.072332966s)
	I0908 16:38:27.886694   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:27.886714   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:27.887004   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:27.887019   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:27.887028   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:38:27.887034   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:38:27.887253   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:38:27.887269   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:38:27.887307   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:38:27.888300   12488 addons.go:479] Verifying addon gcp-auth=true in "addons-198632"
	I0908 16:38:27.890077   12488 out.go:179] * Verifying gcp-auth addon...
	I0908 16:38:27.892389   12488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0908 16:38:27.896659   12488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 16:38:27.896681   12488 node_conditions.go:123] node cpu capacity is 2
	I0908 16:38:27.896696   12488 node_conditions.go:105] duration metric: took 10.117271ms to run NodePressure ...
	I0908 16:38:27.896708   12488 start.go:241] waiting for startup goroutines ...
	I0908 16:38:27.901121   12488 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0908 16:38:27.901137   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:27.950434   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:27.950630   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:28.167892   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:28.345369   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:28.396843   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:28.455399   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:28.455489   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:28.849740   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:28.896967   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:28.945728   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:28.948769   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:29.348237   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:29.399441   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:29.449526   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:29.449573   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:29.485173   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.317185044s)
	W0908 16:38:29.485215   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:29.485239   12488 retry.go:31] will retry after 287.127053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:29.772731   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:29.842979   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:29.898631   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:29.952394   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:29.953399   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:30.345532   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:30.397768   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:30.444021   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:30.445127   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:30.844699   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:30.896812   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:30.952139   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:30.952269   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:31.077840   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.305064738s)
	W0908 16:38:31.077876   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:31.077894   12488 retry.go:31] will retry after 631.717628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:31.347969   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:31.397447   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:31.446768   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:31.449960   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:31.710356   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:31.849968   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:31.897704   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:31.947870   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:31.947998   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:32.346926   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:32.396981   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:32.445177   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:32.450040   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:32.843541   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:32.875726   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.165316616s)
	W0908 16:38:32.875770   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:32.875797   12488 retry.go:31] will retry after 1.683773013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:32.897367   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:32.946364   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:32.950919   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:33.343901   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:33.399209   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:33.447698   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:33.454290   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:33.840484   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:33.897221   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:33.945303   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:33.945365   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:34.344001   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:34.397905   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:34.447230   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:34.449014   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:34.560188   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:34.840833   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:34.899949   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:34.947433   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:34.950770   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:35.341998   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:35.482991   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:35.489190   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:35.490870   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:35.844870   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:35.850508   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.290281286s)
	W0908 16:38:35.850540   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:35.850561   12488 retry.go:31] will retry after 1.097830165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:35.895657   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:35.946224   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:35.946632   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:36.341149   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:36.531786   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:36.532768   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:36.533300   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:36.841875   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:36.896166   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:36.946437   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:36.947379   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:36.949471   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:37.342331   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:37.399391   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:37.445730   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:37.445786   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:37.843815   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:37.899204   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:37.944137   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:37.945868   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:38.084672   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.135164454s)
	W0908 16:38:38.084719   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:38.084736   12488 retry.go:31] will retry after 3.054681091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:38.345690   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:38.861238   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:38.866509   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:38.866956   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:38.867545   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:38.967574   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:38.967670   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:38.968112   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:39.341925   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:39.396294   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:39.444358   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:39.444711   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:39.843474   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:39.897296   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:39.947405   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:39.948726   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:40.342339   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:40.397965   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:40.446079   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:40.448254   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:40.845582   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:41.074279   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:41.076399   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:41.078161   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:41.140627   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:41.342856   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:41.396724   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:41.445846   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:41.447135   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:41.857705   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:41.896386   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:41.945578   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:41.947302   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 16:38:42.068819   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:42.068859   12488 retry.go:31] will retry after 3.331656696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:42.341406   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:42.397088   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:42.444470   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:42.445145   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:42.841664   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:42.895782   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:42.944054   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:42.944332   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:43.341800   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:43.395661   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:43.445302   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:43.445580   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:43.842418   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:43.898234   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:43.947840   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:43.948626   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:44.341381   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:44.396939   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:44.445095   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:44.447005   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:44.840537   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:44.897299   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:44.997705   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:44.998071   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:45.341384   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:45.396689   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:45.400897   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:45.444477   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:45.444635   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:45.842125   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:45.897278   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:45.944189   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:45.945138   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 16:38:46.140357   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:46.140399   12488 retry.go:31] will retry after 7.895616158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:46.341290   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:46.397382   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:46.445254   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:46.445408   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:46.845403   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:46.896768   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:46.944740   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:46.944898   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:47.341743   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:47.395964   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:47.444037   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:47.445344   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:47.846356   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:47.900973   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:47.949277   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:47.951111   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:48.341059   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:48.397701   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:48.446016   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:48.447037   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:48.847550   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:48.895354   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:48.948435   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:48.948723   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:49.343588   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:49.398040   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:49.444024   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:49.446833   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:49.842960   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:49.899354   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:49.948459   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:49.951161   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:50.343960   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:50.397686   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:50.447742   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:50.449188   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:50.848181   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:50.896685   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:50.943897   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:50.945018   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:51.344861   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:51.398508   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:51.445036   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:51.446886   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:51.842909   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:51.896701   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:51.944174   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:51.944487   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:52.341711   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:52.397390   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:52.445657   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:52.447320   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:52.842202   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:52.925742   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:52.945946   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:52.947623   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:53.350084   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:53.397281   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:53.446154   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:53.447161   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:53.844537   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:53.901727   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:53.944888   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:53.944932   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:54.037177   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:38:54.342271   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:54.399835   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:54.445343   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:54.447601   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 16:38:54.845435   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:54.845486   12488 retry.go:31] will retry after 8.26672771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:38:54.849131   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:54.898593   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:54.998941   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:54.998957   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:55.340689   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:55.396865   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:55.444098   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:55.444414   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:55.846480   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:55.898242   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:55.947171   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:55.950153   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:56.342939   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:56.395974   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:56.447544   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:56.448337   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:56.842401   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:56.898104   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:56.948151   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:56.948258   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:57.342550   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:57.402537   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:57.446724   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:57.448873   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:57.844168   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:57.898391   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:57.945578   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:57.946197   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:58.346838   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:58.401709   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:58.444507   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:58.445071   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:58.842918   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:58.897019   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:58.944918   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:58.945454   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:59.342485   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:59.399211   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:59.444480   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:38:59.444955   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:59.840801   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:38:59.941214   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:38:59.944654   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:38:59.945622   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:00.341853   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:00.395767   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:00.445052   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:00.445222   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:00.841829   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:00.896345   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:00.944467   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:00.945728   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:01.341947   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:01.396368   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:01.445779   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:01.446032   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:01.840918   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:01.897004   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:01.948545   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:01.951631   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:02.343998   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:02.396554   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:02.448007   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:02.448886   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:02.842078   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:02.899330   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:02.945809   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:02.947280   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:03.112562   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:39:03.342588   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:03.397267   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:03.445503   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:03.448752   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:03.846832   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:03.895944   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 16:39:03.908379   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:39:03.908457   12488 retry.go:31] will retry after 19.181748484s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:39:03.949067   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:03.949093   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:04.341628   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:04.395956   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:04.443416   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:04.445786   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:04.841284   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:04.896160   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:04.944698   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:04.944830   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:05.341916   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:05.396498   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:05.449472   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:05.449512   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:05.841939   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:05.896541   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:05.947475   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:05.947782   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:06.341138   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:06.399806   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:06.445872   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:06.453476   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:06.841048   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:06.896181   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:06.944611   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:06.945757   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:07.342220   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:07.397606   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:07.443646   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:07.447311   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:07.846719   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:07.899829   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:07.950068   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:07.950798   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:08.342058   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:08.397229   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:08.445846   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:08.447525   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:08.841768   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:08.895725   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:08.946799   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 16:39:08.946889   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:09.341835   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:09.396454   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:09.444560   12488 kapi.go:107] duration metric: took 45.504108184s to wait for kubernetes.io/minikube-addons=registry ...
	I0908 16:39:09.445398   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:09.841688   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:09.896086   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:09.944279   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:10.342565   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:10.396433   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:10.444279   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:10.845122   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:10.898730   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:10.953404   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:11.343711   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:11.526711   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:11.527575   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:11.842905   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:11.896502   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:11.943805   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:12.342779   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:12.396082   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:12.444998   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:12.842418   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:12.897325   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:12.945185   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:13.341214   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:13.400875   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:13.444331   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:13.841761   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:13.897201   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:13.944594   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:14.347697   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:14.398729   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:14.446389   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:14.843331   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:14.896664   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:14.947698   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:15.345193   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:15.398510   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:15.445274   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:15.847109   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:15.896364   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:15.945904   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:16.344083   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:16.399824   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:16.449200   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:16.843389   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:16.897778   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:16.944363   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:17.367217   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:17.396517   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:17.446960   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:17.891835   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:17.900991   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:17.954774   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:18.342415   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:18.398042   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:18.447044   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:18.844996   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:18.896345   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:18.944482   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:19.341933   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:19.399680   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:19.445750   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:19.842445   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:19.902571   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:20.179024   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:20.346928   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:20.400056   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:20.448615   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:20.842661   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:20.897211   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:20.946946   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:21.341414   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:21.410147   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:21.445723   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:21.842556   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:21.896260   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:21.945325   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:22.343757   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:22.396174   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:22.446407   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:22.841302   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:22.896329   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:22.945149   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:23.091429   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:39:23.345786   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:23.399966   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:23.446591   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:23.842674   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:23.896729   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:23.946157   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 16:39:24.071396   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:39:24.071435   12488 retry.go:31] will retry after 11.356984152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:39:24.344595   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:24.396142   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:24.443885   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:24.844287   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:24.898605   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:24.947617   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:25.340890   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:25.396076   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:25.445190   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:25.840867   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:25.896484   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:25.943836   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:26.348061   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:26.397783   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:26.446738   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:26.841670   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:26.895733   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:26.944174   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:27.341324   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:27.396586   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:27.443981   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:27.959358   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:27.960500   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:27.961437   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:28.341614   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:28.398878   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:28.445692   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:28.846774   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:28.896443   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:28.946376   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:29.342959   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:29.403029   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:29.444857   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:29.841705   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:29.899014   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:29.944837   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:30.472874   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:30.473201   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:30.473418   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:30.842490   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:30.895758   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:30.944478   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:31.347547   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:31.399097   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:31.499914   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:31.845515   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:31.900667   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:31.947926   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:32.346079   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:32.398324   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:32.448686   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:32.842818   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:32.895914   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:32.944351   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:33.344696   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:33.401144   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:33.447554   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:33.842708   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:33.895759   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:33.954632   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:34.343415   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:34.423236   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:34.444425   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:34.841535   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:34.898505   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:34.947724   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:35.341650   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:35.396673   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:35.428824   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:39:35.445765   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:35.842748   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:35.898770   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:35.948170   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:36.342755   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:36.400841   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:36.450716   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:36.562106   12488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.133238622s)
	W0908 16:39:36.562162   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:39:36.562185   12488 retry.go:31] will retry after 24.642831953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:39:36.842637   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:36.897388   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:36.947528   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:37.346118   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:37.398528   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:37.448590   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:37.853192   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:37.897184   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:37.997620   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:38.345426   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:38.444694   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:38.448347   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:38.847345   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:38.897739   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:38.945296   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:39.341342   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:39.405323   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:39.444815   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:39.843413   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:39.896396   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:39.945844   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:40.342832   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:40.397605   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:40.444170   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:40.858377   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:40.901358   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:40.964498   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:41.342127   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:41.396612   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:41.444669   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:41.845324   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:41.897097   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:41.946586   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:42.344019   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:42.396467   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:42.447182   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:42.841576   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:42.895632   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:42.943798   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:43.346716   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:43.401790   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:43.444589   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:43.843757   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:43.895706   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:43.945474   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:44.343241   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:44.398394   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:44.446386   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:44.841883   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:44.897657   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:44.947443   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:45.342122   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:45.400340   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:45.498545   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:45.842509   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:45.896797   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:45.945308   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:46.341937   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:46.395924   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:46.445236   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:46.848002   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:46.899821   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:46.949110   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:47.341538   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 16:39:47.395896   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:47.444635   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:47.842798   12488 kapi.go:107] duration metric: took 1m22.505585788s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0908 16:39:47.895950   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:47.944272   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:48.395684   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:48.444566   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:48.896782   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:48.944646   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:49.395907   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:49.444261   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:49.895677   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:49.944546   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:50.395759   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:50.444251   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:50.896452   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:50.943724   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:51.397968   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:51.444721   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:51.897626   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:51.943927   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:52.397149   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:52.445723   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:52.897407   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:52.945179   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:53.398441   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:53.443908   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:53.896922   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:53.944435   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:54.396710   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:54.444656   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:54.896479   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:54.944079   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:55.395850   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:55.444788   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:55.896597   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:55.944196   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:56.395746   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:56.444704   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:56.897802   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:56.944026   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:57.397030   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:57.444509   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:57.896109   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:57.944716   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:58.396563   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:58.444709   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:58.898415   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:58.946156   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:59.395865   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:59.443792   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:39:59.897541   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:39:59.944704   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:00.397327   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:00.445637   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:00.896717   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:00.944684   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:01.206057   12488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 16:40:01.402167   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:01.446636   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:01.900444   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 16:40:01.941229   12488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 16:40:01.941310   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:40:01.941324   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:40:01.941624   12488 main.go:141] libmachine: (addons-198632) DBG | Closing plugin on server side
	I0908 16:40:01.941669   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:40:01.941680   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 16:40:01.941695   12488 main.go:141] libmachine: Making call to close driver server
	I0908 16:40:01.941708   12488 main.go:141] libmachine: (addons-198632) Calling .Close
	I0908 16:40:01.941952   12488 main.go:141] libmachine: Successfully made call to close driver server
	I0908 16:40:01.941973   12488 main.go:141] libmachine: Making call to close connection to plugin binary
	W0908 16:40:01.942062   12488 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0908 16:40:01.945090   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:02.396736   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:02.445695   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:02.897434   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:02.945348   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:03.398759   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:03.444522   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:03.895934   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:03.944981   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:04.395262   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:04.444179   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:04.896465   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:04.944805   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:05.397234   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:05.446343   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:05.896615   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:05.944424   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:06.396740   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:06.443901   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:06.897718   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:06.944155   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:07.395922   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:07.445011   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:07.896489   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:07.944001   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:08.397313   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:08.445029   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:08.897128   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:08.944557   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:09.397624   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:09.448045   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:09.897730   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:09.944123   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:10.397019   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:10.444157   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:10.897175   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:10.945118   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:11.397307   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:11.444741   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:11.897887   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:11.944365   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:12.396453   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:12.443798   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:12.899832   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:12.944573   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:13.400406   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:13.444246   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:13.896055   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:13.944262   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:14.396057   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:14.443553   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:14.897504   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:14.944472   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:15.396932   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:15.444805   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:15.896912   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:15.944635   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:16.397119   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:16.444362   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:16.896333   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:16.945762   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:17.396107   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:17.444695   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:17.896395   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:17.944674   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:18.396740   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:18.443988   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:18.899431   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:18.945591   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:19.395434   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:19.445393   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:19.896540   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:19.946156   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:20.396353   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:20.444897   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:20.898964   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:20.945179   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:21.396513   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:21.443717   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:21.896404   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:21.943930   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:22.396482   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:22.444489   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:22.899216   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:22.945175   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:23.395693   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:23.444657   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:23.895904   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:23.944454   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:24.397110   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:24.443889   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:24.897155   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:24.944886   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:25.396154   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:25.445043   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:25.896739   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:25.944979   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:26.396474   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:26.444084   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:26.897315   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:26.947326   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:27.395992   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:27.444558   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:27.896470   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:27.943814   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:28.396669   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:28.445691   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:28.898237   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:28.999165   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:29.399700   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:29.445821   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:29.897501   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:29.944194   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:30.396901   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:30.444958   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:30.896582   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:30.945128   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:31.398914   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:31.443930   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:31.897194   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:31.944901   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:32.397258   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:32.445635   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:32.896755   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:32.945167   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:33.396845   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:33.444404   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:33.895872   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:33.944154   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:34.395827   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:34.444081   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:34.895963   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:34.945813   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:35.395951   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:35.444187   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:35.896117   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:35.944500   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:36.397758   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:36.444326   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:36.896412   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:36.944316   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:37.396977   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:37.445271   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:37.895598   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:37.943687   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:38.396215   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:38.444576   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:38.897242   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:38.944302   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:39.400013   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:39.446265   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:39.896317   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:39.945027   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:40.397140   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:40.444568   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:40.898494   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:40.949592   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:41.399365   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:41.447586   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:41.906590   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:41.949907   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:42.397435   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:42.443701   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:42.897298   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:42.946126   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:43.400564   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:43.447841   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:43.897659   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:43.946867   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:44.398160   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:44.447500   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:44.896619   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:44.947698   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:45.399090   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:45.445511   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:45.902783   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:45.944855   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:46.399238   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:46.499059   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:46.897242   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:46.944791   12488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 16:40:47.414797   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:47.449630   12488 kapi.go:107] duration metric: took 2m23.509272435s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0908 16:40:47.897638   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:48.399060   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:48.896793   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:49.399492   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:49.897667   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:50.396472   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:50.897430   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:51.397683   12488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 16:40:51.897152   12488 kapi.go:107] duration metric: took 2m24.004758283s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0908 16:40:51.898929   12488 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-198632 cluster.
	I0908 16:40:51.900332   12488 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0908 16:40:51.901663   12488 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0908 16:40:51.903247   12488 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, registry-creds, amd-gpu-device-plugin, metrics-server, storage-provisioner, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0908 16:40:51.904712   12488 addons.go:514] duration metric: took 2m37.053537905s for enable addons: enabled=[nvidia-device-plugin ingress-dns registry-creds amd-gpu-device-plugin metrics-server storage-provisioner cloud-spanner yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0908 16:40:51.904748   12488 start.go:246] waiting for cluster config update ...
	I0908 16:40:51.904776   12488 start.go:255] writing updated cluster config ...
	I0908 16:40:51.905042   12488 ssh_runner.go:195] Run: rm -f paused
	I0908 16:40:51.913314   12488 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 16:40:51.916718   12488 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tfcgh" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:51.923233   12488 pod_ready.go:94] pod "coredns-66bc5c9577-tfcgh" is "Ready"
	I0908 16:40:51.923254   12488 pod_ready.go:86] duration metric: took 6.51549ms for pod "coredns-66bc5c9577-tfcgh" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:51.925162   12488 pod_ready.go:83] waiting for pod "etcd-addons-198632" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:51.930766   12488 pod_ready.go:94] pod "etcd-addons-198632" is "Ready"
	I0908 16:40:51.930784   12488 pod_ready.go:86] duration metric: took 5.604868ms for pod "etcd-addons-198632" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:51.932983   12488 pod_ready.go:83] waiting for pod "kube-apiserver-addons-198632" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:51.937347   12488 pod_ready.go:94] pod "kube-apiserver-addons-198632" is "Ready"
	I0908 16:40:51.937364   12488 pod_ready.go:86] duration metric: took 4.366355ms for pod "kube-apiserver-addons-198632" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:51.939974   12488 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-198632" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:52.317257   12488 pod_ready.go:94] pod "kube-controller-manager-addons-198632" is "Ready"
	I0908 16:40:52.317280   12488 pod_ready.go:86] duration metric: took 377.289936ms for pod "kube-controller-manager-addons-198632" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:52.518814   12488 pod_ready.go:83] waiting for pod "kube-proxy-6dnhn" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:52.917813   12488 pod_ready.go:94] pod "kube-proxy-6dnhn" is "Ready"
	I0908 16:40:52.917839   12488 pod_ready.go:86] duration metric: took 398.998997ms for pod "kube-proxy-6dnhn" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:53.117847   12488 pod_ready.go:83] waiting for pod "kube-scheduler-addons-198632" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:53.517867   12488 pod_ready.go:94] pod "kube-scheduler-addons-198632" is "Ready"
	I0908 16:40:53.517896   12488 pod_ready.go:86] duration metric: took 400.023451ms for pod "kube-scheduler-addons-198632" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 16:40:53.517907   12488 pod_ready.go:40] duration metric: took 1.604562247s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 16:40:53.576324   12488 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 16:40:53.577898   12488 out.go:179] * Done! kubectl is now configured to use "addons-198632" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.134818567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e5dcb7c-b62a-466c-87cb-52167fca9673 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.135492336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:901f13843ff7ecab9644844ca943f4919bd4919cf01f57e32f2349dbae7884dd,PodSandboxId:78541438e1325ae9572148da2bfcab6ce4585f0cc3dd9572acc93838f2df8419,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757349704232599705,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 81e6f15d-59f8-4450-a2eb-847b8cb17a16,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ac876c9808be7d28f308f76b1fbe1537bd434cea97f258c51901bbf98b73c7,PodSandboxId:628e4e17bae98312010e9ab80df0351006a5ff62735fbf7cb9a74234f1a589fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757349658371022759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68ecb2ec-8675-4c4b-8edf-5e76a3a05382,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a261849ba738a4b9b43cee646c9df885e3394b920d14aa3795dee05cf7dd1c5b,PodSandboxId:77d05dd56f988530114e99363d8ab062141c6ccd9b8d7ec01014c198528b5adf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757349646338863352,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-sh6ww,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85869be2-4ec1-4f66-8b29-edc0503e3d9e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ff0aaf64aa7d3dc8a0192cd74f9a4a25a367dca9665dbc71c5f6224bfa0cfc7e,PodSandboxId:07eba33269ca5feaf775556fb3b9789da96602b6243b3fe48604587dabb14de2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1757349573419295357,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ts5cj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 555604c3-1cec-4a45-9c3f-674b79cf2807,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de013bc9b7ddcf19c8863e68be7adb43ac024f1db9c01c95238fbd0bea7e670c,PodSandboxId:7f950ce80fbb2fa3ea4215d45ed4f722c19fa5def395a9e876bec80458146b10,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757349572580988585,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-b8786,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 75ce458d-2c03-41b8-9c22-b5c00118dffd,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1747fe43c57c8515bded12451707bb5dcb914336701227d328fadc67dd4252,PodSandboxId:930fa56bf70706017f7f7c1aabdab168372a2ba71ec07560b4fe6e31159987cc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c08
45d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757349568623840049,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-fs8hb,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 28f4e5bf-bd39-48fa-93c9-f5c6e419bd61,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc8dac858aab976ce42f418788c0f78fe77e61ca939ccf9d7d6a5a5c46e59f5,PodSandboxId:a0553e7917f21974f98673f546530e3b1eceacdd1490e8577086e6e26609f43d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757349561545596056,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb618d0d-794b-4274-8651-39bb05d68842,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8690485777af4102a906134ba418143390f9b902deb45cf57cec4d1ae5e979be,PodSandboxId:8e5d902a4655df4ce55c3f8869d44be91f748ba2a342e19119b47134bac1bb0d,Metadata:&ContainerMetadata{Name:local-p
ath-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1757349551741514887,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-58vbw,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 03ab7b3e-f236-43db-b901-c1abbe26e293,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6f7c8b93186af76651cd3d0ba1a4f47d46bb43a7f35ada5e48943f8c4a364b,PodSandboxId:66f04a25860d4c7bf6add79e349013174803
629f748c72670ccf692372446b8c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757349524875381170,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lhpzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a653b87-ffd2-4e44-9149-6a55a3562d83,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad31740b5a40ce11aa72c79d8b03f7004e039d11651c0ac684e81b9315c0cf3,PodSandbo
xId:f7cd6ab39946448c99e29f699f116d69cf86098abe1a3da5ad799a950976a130,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757349505268906130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 748f19a7-41a5-4f1a-9a72-dff4a2174162,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99d847bb50459e825e8c64f96b431b250dd4e34e75f2095eb3035fcc78495b4,PodSandboxId:75aedfac
3525cc3c91447490084fd28edc58771676f711715794556e701396ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757349496345200207,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tfcgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49b047ea-0a66-4af1-b221-cb511b51732a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f289b2d369f6dc210216636cb4e3f8ef4a3e29010551272e91d5e6967fe9ae12,PodSandboxId:8fdd8c33b085e048b1cfdfc0a48933034036b9ab755fb569df601ff08af9e5bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757349495149716226,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6dnhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56138bc7-98e7-4d65-829a-1d69a7102e66,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f848565c2d06cdcdb359cdc38a1081c916249eef558157d015748df57581bf4d,PodSandboxId:a70b0b6b25aaee18b12be6028122ced63c3d123a775f5ec3d4fcdc86290c0ee4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757349483809884004,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-198632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869e2d8fef8ed9b168447dcf4910037a,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-p
ort\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32722aed0cdde02b2503db43d822b9f4618039f5c36a4ad6777ca030a1d435bc,PodSandboxId:12c17d0739e79a128e52240f3f7916047119abf5de1d19752db84fb68b64b498,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757349483811921014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-198632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4077c7e95cfd57c4049d7803f77d668,},Annotations:map[
string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba8204096f16cc20b6aa0d81d3042f7bd13a22c8a84b295ef817db4026adcf0,PodSandboxId:e205012a02345889fb4cad0a89c6667539c104759e3b29cacc9f40a99b9ace6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757349483777109171,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-198632,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9d09ff6fd4ec11dd6ec58e5295f249,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e36da9d1e71c5e49505742d852dcf4613e52d6b5e38d6a4c654b6bde4a12fa3,PodSandboxId:6f660d06bb8911ddf7fbaf0c4197f765e96430317ec42142fc61cbbab237b44c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757349483721863719,Labels:map[string]s
tring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-198632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864f7f9a89c0893e7f6e63e7585d2612,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e5dcb7c-b62a-466c-87cb-52167fca9673 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.178775622Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b34a0dba-3ad4-4778-85de-0711d4984694 name=/runtime.v1.RuntimeService/Version
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.178974268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b34a0dba-3ad4-4778-85de-0711d4984694 name=/runtime.v1.RuntimeService/Version
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.180804202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d7f8590-93c0-4116-ad9f-d972841ec0da name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.183297780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757349849183266760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d7f8590-93c0-4116-ad9f-d972841ec0da name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.184657471Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1921e89b-c6ce-4e29-8929-fe0564beffe9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.184730689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1921e89b-c6ce-4e29-8929-fe0564beffe9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.185079180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:901f13843ff7ecab9644844ca943f4919bd4919cf01f57e32f2349dbae7884dd,PodSandboxId:78541438e1325ae9572148da2bfcab6ce4585f0cc3dd9572acc93838f2df8419,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757349704232599705,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 81e6f15d-59f8-4450-a2eb-847b8cb17a16,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ac876c9808be7d28f308f76b1fbe1537bd434cea97f258c51901bbf98b73c7,PodSandboxId:628e4e17bae98312010e9ab80df0351006a5ff62735fbf7cb9a74234f1a589fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757349658371022759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68ecb2ec-8675-4c4b-8edf-5e76a3a05382,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a261849ba738a4b9b43cee646c9df885e3394b920d14aa3795dee05cf7dd1c5b,PodSandboxId:77d05dd56f988530114e99363d8ab062141c6ccd9b8d7ec01014c198528b5adf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757349646338863352,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-sh6ww,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85869be2-4ec1-4f66-8b29-edc0503e3d9e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ff0aaf64aa7d3dc8a0192cd74f9a4a25a367dca9665dbc71c5f6224bfa0cfc7e,PodSandboxId:07eba33269ca5feaf775556fb3b9789da96602b6243b3fe48604587dabb14de2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1757349573419295357,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ts5cj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 555604c3-1cec-4a45-9c3f-674b79cf2807,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de013bc9b7ddcf19c8863e68be7adb43ac024f1db9c01c95238fbd0bea7e670c,PodSandboxId:7f950ce80fbb2fa3ea4215d45ed4f722c19fa5def395a9e876bec80458146b10,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757349572580988585,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-b8786,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 75ce458d-2c03-41b8-9c22-b5c00118dffd,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1747fe43c57c8515bded12451707bb5dcb914336701227d328fadc67dd4252,PodSandboxId:930fa56bf70706017f7f7c1aabdab168372a2ba71ec07560b4fe6e31159987cc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c08
45d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757349568623840049,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-fs8hb,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 28f4e5bf-bd39-48fa-93c9-f5c6e419bd61,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc8dac858aab976ce42f418788c0f78fe77e61ca939ccf9d7d6a5a5c46e59f5,PodSandboxId:a0553e7917f21974f98673f546530e3b1eceacdd1490e8577086e6e26609f43d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757349561545596056,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb618d0d-794b-4274-8651-39bb05d68842,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8690485777af4102a906134ba418143390f9b902deb45cf57cec4d1ae5e979be,PodSandboxId:8e5d902a4655df4ce55c3f8869d44be91f748ba2a342e19119b47134bac1bb0d,Metadata:&ContainerMetadata{Name:local-p
ath-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1757349551741514887,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-58vbw,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 03ab7b3e-f236-43db-b901-c1abbe26e293,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6f7c8b93186af76651cd3d0ba1a4f47d46bb43a7f35ada5e48943f8c4a364b,PodSandboxId:66f04a25860d4c7bf6add79e349013174803
629f748c72670ccf692372446b8c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757349524875381170,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lhpzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a653b87-ffd2-4e44-9149-6a55a3562d83,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad31740b5a40ce11aa72c79d8b03f7004e039d11651c0ac684e81b9315c0cf3,PodSandbo
xId:f7cd6ab39946448c99e29f699f116d69cf86098abe1a3da5ad799a950976a130,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757349505268906130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 748f19a7-41a5-4f1a-9a72-dff4a2174162,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99d847bb50459e825e8c64f96b431b250dd4e34e75f2095eb3035fcc78495b4,PodSandboxId:75aedfac
3525cc3c91447490084fd28edc58771676f711715794556e701396ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757349496345200207,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tfcgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49b047ea-0a66-4af1-b221-cb511b51732a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f289b2d369f6dc210216636cb4e3f8ef4a3e29010551272e91d5e6967fe9ae12,PodSandboxId:8fdd8c33b085e048b1cfdfc0a48933034036b9ab755fb569df601ff08af9e5bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757349495149716226,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6dnhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56138bc7-98e7-4d65-829a-1d69a7102e66,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f848565c2d06cdcdb359cdc38a1081c916249eef558157d015748df57581bf4d,PodSandboxId:a70b0b6b25aaee18b12be6028122ced63c3d123a775f5ec3d4fcdc86290c0ee4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757349483809884004,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-198632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869e2d8fef8ed9b168447dcf4910037a,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-p
ort\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32722aed0cdde02b2503db43d822b9f4618039f5c36a4ad6777ca030a1d435bc,PodSandboxId:12c17d0739e79a128e52240f3f7916047119abf5de1d19752db84fb68b64b498,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757349483811921014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-198632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4077c7e95cfd57c4049d7803f77d668,},Annotations:map[
string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba8204096f16cc20b6aa0d81d3042f7bd13a22c8a84b295ef817db4026adcf0,PodSandboxId:e205012a02345889fb4cad0a89c6667539c104759e3b29cacc9f40a99b9ace6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757349483777109171,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-198632,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9d09ff6fd4ec11dd6ec58e5295f249,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e36da9d1e71c5e49505742d852dcf4613e52d6b5e38d6a4c654b6bde4a12fa3,PodSandboxId:6f660d06bb8911ddf7fbaf0c4197f765e96430317ec42142fc61cbbab237b44c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757349483721863719,Labels:map[string]s
tring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-198632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864f7f9a89c0893e7f6e63e7585d2612,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1921e89b-c6ce-4e29-8929-fe0564beffe9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.210375704Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.210966622Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.224889135Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=472fba9d-f7ee-415b-b4e4-e332f31e9de9 name=/runtime.v1.RuntimeService/Version
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.224981578Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=472fba9d-f7ee-415b-b4e4-e332f31e9de9 name=/runtime.v1.RuntimeService/Version
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.226418284Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0c4a2a7-1fbf-4987-b691-2fa186a2c7b6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.227786349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757349849227756186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0c4a2a7-1fbf-4987-b691-2fa186a2c7b6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.228785523Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0472f6d9-f997-4918-9f4c-42d25613537d name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.228947155Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0472f6d9-f997-4918-9f4c-42d25613537d name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.229276689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:901f13843ff7ecab9644844ca943f4919bd4919cf01f57e32f2349dbae7884dd,PodSandboxId:78541438e1325ae9572148da2bfcab6ce4585f0cc3dd9572acc93838f2df8419,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757349704232599705,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 81e6f15d-59f8-4450-a2eb-847b8cb17a16,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ac876c9808be7d28f308f76b1fbe1537bd434cea97f258c51901bbf98b73c7,PodSandboxId:628e4e17bae98312010e9ab80df0351006a5ff62735fbf7cb9a74234f1a589fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757349658371022759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68ecb2ec-8675-4c4b-8edf-5e76a3a05382,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a261849ba738a4b9b43cee646c9df885e3394b920d14aa3795dee05cf7dd1c5b,PodSandboxId:77d05dd56f988530114e99363d8ab062141c6ccd9b8d7ec01014c198528b5adf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757349646338863352,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-sh6ww,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85869be2-4ec1-4f66-8b29-edc0503e3d9e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ff0aaf64aa7d3dc8a0192cd74f9a4a25a367dca9665dbc71c5f6224bfa0cfc7e,PodSandboxId:07eba33269ca5feaf775556fb3b9789da96602b6243b3fe48604587dabb14de2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1757349573419295357,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ts5cj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 555604c3-1cec-4a45-9c3f-674b79cf2807,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de013bc9b7ddcf19c8863e68be7adb43ac024f1db9c01c95238fbd0bea7e670c,PodSandboxId:7f950ce80fbb2fa3ea4215d45ed4f722c19fa5def395a9e876bec80458146b10,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757349572580988585,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-b8786,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 75ce458d-2c03-41b8-9c22-b5c00118dffd,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1747fe43c57c8515bded12451707bb5dcb914336701227d328fadc67dd4252,PodSandboxId:930fa56bf70706017f7f7c1aabdab168372a2ba71ec07560b4fe6e31159987cc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c08
45d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757349568623840049,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-fs8hb,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 28f4e5bf-bd39-48fa-93c9-f5c6e419bd61,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc8dac858aab976ce42f418788c0f78fe77e61ca939ccf9d7d6a5a5c46e59f5,PodSandboxId:a0553e7917f21974f98673f546530e3b1eceacdd1490e8577086e6e26609f43d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757349561545596056,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb618d0d-794b-4274-8651-39bb05d68842,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8690485777af4102a906134ba418143390f9b902deb45cf57cec4d1ae5e979be,PodSandboxId:8e5d902a4655df4ce55c3f8869d44be91f748ba2a342e19119b47134bac1bb0d,Metadata:&ContainerMetadata{Name:local-p
ath-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1757349551741514887,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-58vbw,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 03ab7b3e-f236-43db-b901-c1abbe26e293,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6f7c8b93186af76651cd3d0ba1a4f47d46bb43a7f35ada5e48943f8c4a364b,PodSandboxId:66f04a25860d4c7bf6add79e349013174803
629f748c72670ccf692372446b8c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757349524875381170,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lhpzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a653b87-ffd2-4e44-9149-6a55a3562d83,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad31740b5a40ce11aa72c79d8b03f7004e039d11651c0ac684e81b9315c0cf3,PodSandbo
xId:f7cd6ab39946448c99e29f699f116d69cf86098abe1a3da5ad799a950976a130,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757349505268906130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 748f19a7-41a5-4f1a-9a72-dff4a2174162,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99d847bb50459e825e8c64f96b431b250dd4e34e75f2095eb3035fcc78495b4,PodSandboxId:75aedfac
3525cc3c91447490084fd28edc58771676f711715794556e701396ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757349496345200207,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tfcgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49b047ea-0a66-4af1-b221-cb511b51732a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f289b2d369f6dc210216636cb4e3f8ef4a3e29010551272e91d5e6967fe9ae12,PodSandboxId:8fdd8c33b085e048b1cfdfc0a48933034036b9ab755fb569df601ff08af9e5bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757349495149716226,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6dnhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56138bc7-98e7-4d65-829a-1d69a7102e66,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f848565c2d06cdcdb359cdc38a1081c916249eef558157d015748df57581bf4d,PodSandboxId:a70b0b6b25aaee18b12be6028122ced63c3d123a775f5ec3d4fcdc86290c0ee4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757349483809884004,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-198632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869e2d8fef8ed9b168447dcf4910037a,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-p
ort\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32722aed0cdde02b2503db43d822b9f4618039f5c36a4ad6777ca030a1d435bc,PodSandboxId:12c17d0739e79a128e52240f3f7916047119abf5de1d19752db84fb68b64b498,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757349483811921014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-198632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4077c7e95cfd57c4049d7803f77d668,},Annotations:map[
string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba8204096f16cc20b6aa0d81d3042f7bd13a22c8a84b295ef817db4026adcf0,PodSandboxId:e205012a02345889fb4cad0a89c6667539c104759e3b29cacc9f40a99b9ace6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757349483777109171,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-198632,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9d09ff6fd4ec11dd6ec58e5295f249,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e36da9d1e71c5e49505742d852dcf4613e52d6b5e38d6a4c654b6bde4a12fa3,PodSandboxId:6f660d06bb8911ddf7fbaf0c4197f765e96430317ec42142fc61cbbab237b44c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757349483721863719,Labels:map[string]s
tring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-198632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864f7f9a89c0893e7f6e63e7585d2612,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0472f6d9-f997-4918-9f4c-42d25613537d name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.270961844Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30f4fb27-11cb-4dee-9e6f-e005f0ddbdf8 name=/runtime.v1.RuntimeService/Version
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.271063118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30f4fb27-11cb-4dee-9e6f-e005f0ddbdf8 name=/runtime.v1.RuntimeService/Version
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.272720722Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30256727-4a87-4458-ba8d-4d2152c6cb3c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.273993147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757349849273963426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30256727-4a87-4458-ba8d-4d2152c6cb3c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.274661524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9f8859d-517c-40c7-a1b4-08ad45a38160 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.274818157Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9f8859d-517c-40c7-a1b4-08ad45a38160 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 16:44:09 addons-198632 crio[824]: time="2025-09-08 16:44:09.275167447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:901f13843ff7ecab9644844ca943f4919bd4919cf01f57e32f2349dbae7884dd,PodSandboxId:78541438e1325ae9572148da2bfcab6ce4585f0cc3dd9572acc93838f2df8419,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757349704232599705,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 81e6f15d-59f8-4450-a2eb-847b8cb17a16,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ac876c9808be7d28f308f76b1fbe1537bd434cea97f258c51901bbf98b73c7,PodSandboxId:628e4e17bae98312010e9ab80df0351006a5ff62735fbf7cb9a74234f1a589fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757349658371022759,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68ecb2ec-8675-4c4b-8edf-5e76a3a05382,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a261849ba738a4b9b43cee646c9df885e3394b920d14aa3795dee05cf7dd1c5b,PodSandboxId:77d05dd56f988530114e99363d8ab062141c6ccd9b8d7ec01014c198528b5adf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757349646338863352,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-sh6ww,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85869be2-4ec1-4f66-8b29-edc0503e3d9e,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ff0aaf64aa7d3dc8a0192cd74f9a4a25a367dca9665dbc71c5f6224bfa0cfc7e,PodSandboxId:07eba33269ca5feaf775556fb3b9789da96602b6243b3fe48604587dabb14de2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1757349573419295357,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ts5cj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 555604c3-1cec-4a45-9c3f-674b79cf2807,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de013bc9b7ddcf19c8863e68be7adb43ac024f1db9c01c95238fbd0bea7e670c,PodSandboxId:7f950ce80fbb2fa3ea4215d45ed4f722c19fa5def395a9e876bec80458146b10,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757349572580988585,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-b8786,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 75ce458d-2c03-41b8-9c22-b5c00118dffd,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1747fe43c57c8515bded12451707bb5dcb914336701227d328fadc67dd4252,PodSandboxId:930fa56bf70706017f7f7c1aabdab168372a2ba71ec07560b4fe6e31159987cc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c08
45d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757349568623840049,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-fs8hb,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 28f4e5bf-bd39-48fa-93c9-f5c6e419bd61,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc8dac858aab976ce42f418788c0f78fe77e61ca939ccf9d7d6a5a5c46e59f5,PodSandboxId:a0553e7917f21974f98673f546530e3b1eceacdd1490e8577086e6e26609f43d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757349561545596056,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb618d0d-794b-4274-8651-39bb05d68842,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8690485777af4102a906134ba418143390f9b902deb45cf57cec4d1ae5e979be,PodSandboxId:8e5d902a4655df4ce55c3f8869d44be91f748ba2a342e19119b47134bac1bb0d,Metadata:&ContainerMetadata{Name:local-p
ath-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1757349551741514887,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-58vbw,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 03ab7b3e-f236-43db-b901-c1abbe26e293,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6f7c8b93186af76651cd3d0ba1a4f47d46bb43a7f35ada5e48943f8c4a364b,PodSandboxId:66f04a25860d4c7bf6add79e349013174803
629f748c72670ccf692372446b8c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757349524875381170,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lhpzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a653b87-ffd2-4e44-9149-6a55a3562d83,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad31740b5a40ce11aa72c79d8b03f7004e039d11651c0ac684e81b9315c0cf3,PodSandbo
xId:f7cd6ab39946448c99e29f699f116d69cf86098abe1a3da5ad799a950976a130,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757349505268906130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 748f19a7-41a5-4f1a-9a72-dff4a2174162,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d99d847bb50459e825e8c64f96b431b250dd4e34e75f2095eb3035fcc78495b4,PodSandboxId:75aedfac
3525cc3c91447490084fd28edc58771676f711715794556e701396ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757349496345200207,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tfcgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49b047ea-0a66-4af1-b221-cb511b51732a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f289b2d369f6dc210216636cb4e3f8ef4a3e29010551272e91d5e6967fe9ae12,PodSandboxId:8fdd8c33b085e048b1cfdfc0a48933034036b9ab755fb569df601ff08af9e5bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757349495149716226,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6dnhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56138bc7-98e7-4d65-829a-1d69a7102e66,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f848565c2d06cdcdb359cdc38a1081c916249eef558157d015748df57581bf4d,PodSandboxId:a70b0b6b25aaee18b12be6028122ced63c3d123a775f5ec3d4fcdc86290c0ee4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757349483809884004,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-198632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869e2d8fef8ed9b168447dcf4910037a,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-p
ort\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32722aed0cdde02b2503db43d822b9f4618039f5c36a4ad6777ca030a1d435bc,PodSandboxId:12c17d0739e79a128e52240f3f7916047119abf5de1d19752db84fb68b64b498,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757349483811921014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-198632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4077c7e95cfd57c4049d7803f77d668,},Annotations:map[
string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba8204096f16cc20b6aa0d81d3042f7bd13a22c8a84b295ef817db4026adcf0,PodSandboxId:e205012a02345889fb4cad0a89c6667539c104759e3b29cacc9f40a99b9ace6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757349483777109171,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-198632,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9d09ff6fd4ec11dd6ec58e5295f249,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e36da9d1e71c5e49505742d852dcf4613e52d6b5e38d6a4c654b6bde4a12fa3,PodSandboxId:6f660d06bb8911ddf7fbaf0c4197f765e96430317ec42142fc61cbbab237b44c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757349483721863719,Labels:map[string]s
tring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-198632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864f7f9a89c0893e7f6e63e7585d2612,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b9f8859d-517c-40c7-a1b4-08ad45a38160 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	901f13843ff7e       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   78541438e1325       nginx
	a5ac876c9808b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   628e4e17bae98       busybox
	a261849ba738a       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   77d05dd56f988       ingress-nginx-controller-9cc49f96f-sh6ww
	ff0aaf64aa7d3       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             4 minutes ago       Exited              patch                     1                   07eba33269ca5       ingress-nginx-admission-patch-ts5cj
	de013bc9b7ddc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              create                    0                   7f950ce80fbb2       ingress-nginx-admission-create-b8786
	0f1747fe43c57       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506            4 minutes ago       Running             gadget                    0                   930fa56bf7070       gadget-fs8hb
	bdc8dac858aab       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   a0553e7917f21       kube-ingress-dns-minikube
	8690485777af4       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   8e5d902a4655d       local-path-provisioner-648f6765c9-58vbw
	4a6f7c8b93186       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   66f04a25860d4       amd-gpu-device-plugin-lhpzm
	8ad31740b5a40       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   f7cd6ab399464       storage-provisioner
	d99d847bb5045       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   75aedfac3525c       coredns-66bc5c9577-tfcgh
	f289b2d369f6d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             5 minutes ago       Running             kube-proxy                0                   8fdd8c33b085e       kube-proxy-6dnhn
	32722aed0cdde       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             6 minutes ago       Running             kube-scheduler            0                   12c17d0739e79       kube-scheduler-addons-198632
	f848565c2d06c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             6 minutes ago       Running             etcd                      0                   a70b0b6b25aae       etcd-addons-198632
	aba8204096f16       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             6 minutes ago       Running             kube-apiserver            0                   e205012a02345       kube-apiserver-addons-198632
	9e36da9d1e71c       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             6 minutes ago       Running             kube-controller-manager   0                   6f660d06bb891       kube-controller-manager-addons-198632
	
	
	==> coredns [d99d847bb50459e825e8c64f96b431b250dd4e34e75f2095eb3035fcc78495b4] <==
	[INFO] 10.244.0.8:48335 - 51912 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000148361s
	[INFO] 10.244.0.8:48335 - 12265 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000090058s
	[INFO] 10.244.0.8:48335 - 42847 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000117119s
	[INFO] 10.244.0.8:48335 - 12834 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000093877s
	[INFO] 10.244.0.8:48335 - 59987 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000273167s
	[INFO] 10.244.0.8:48335 - 2946 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000103374s
	[INFO] 10.244.0.8:48335 - 21505 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000163186s
	[INFO] 10.244.0.8:32947 - 55549 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000145858s
	[INFO] 10.244.0.8:32947 - 55236 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000362446s
	[INFO] 10.244.0.8:37888 - 4268 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00018393s
	[INFO] 10.244.0.8:37888 - 3993 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000199631s
	[INFO] 10.244.0.8:55315 - 49842 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108744s
	[INFO] 10.244.0.8:55315 - 49576 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000496175s
	[INFO] 10.244.0.8:44888 - 7752 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000095364s
	[INFO] 10.244.0.8:44888 - 7528 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154704s
	[INFO] 10.244.0.23:54620 - 11574 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000673417s
	[INFO] 10.244.0.23:49361 - 50339 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000171279s
	[INFO] 10.244.0.23:54972 - 21100 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000628876s
	[INFO] 10.244.0.23:33206 - 44617 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00057365s
	[INFO] 10.244.0.23:51958 - 50947 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000112938s
	[INFO] 10.244.0.23:35436 - 18266 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082259s
	[INFO] 10.244.0.23:52198 - 23431 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003991721s
	[INFO] 10.244.0.23:55476 - 28728 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.005673377s
	[INFO] 10.244.0.28:39323 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001376491s
	[INFO] 10.244.0.28:47500 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00146449s
	
	
	==> describe nodes <==
	Name:               addons-198632
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-198632
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6
	                    minikube.k8s.io/name=addons-198632
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T16_38_10_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-198632
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 16:38:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-198632
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 16:44:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 16:42:15 +0000   Mon, 08 Sep 2025 16:38:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 16:42:15 +0000   Mon, 08 Sep 2025 16:38:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 16:42:15 +0000   Mon, 08 Sep 2025 16:38:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 16:42:15 +0000   Mon, 08 Sep 2025 16:38:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    addons-198632
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 ddf72d62a6d34e07bb678f7a9926ed1d
	  System UUID:                ddf72d62-a6d3-4e07-bb67-8f7a9926ed1d
	  Boot ID:                    56afd3ae-9fb5-4e80-9704-97656bc28e38
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  default                     hello-world-app-5d498dc89-cx8wd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  gadget                      gadget-fs8hb                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-sh6ww    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m46s
	  kube-system                 amd-gpu-device-plugin-lhpzm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 coredns-66bc5c9577-tfcgh                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m55s
	  kube-system                 etcd-addons-198632                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m
	  kube-system                 kube-apiserver-addons-198632                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-controller-manager-addons-198632       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-proxy-6dnhn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-scheduler-addons-198632                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  local-path-storage          local-path-provisioner-648f6765c9-58vbw     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m53s  kube-proxy       
	  Normal  Starting                 6m     kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m     kubelet          Node addons-198632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m     kubelet          Node addons-198632 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m     kubelet          Node addons-198632 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m59s  kubelet          Node addons-198632 status is now: NodeReady
	  Normal  RegisteredNode           5m56s  node-controller  Node addons-198632 event: Registered Node addons-198632 in Controller
	
	
	==> dmesg <==
	[  +0.000040] kauditd_printk_skb: 356 callbacks suppressed
	[  +1.647772] kauditd_printk_skb: 327 callbacks suppressed
	[ +11.261009] kauditd_printk_skb: 16 callbacks suppressed
	[ +10.038276] kauditd_printk_skb: 20 callbacks suppressed
	[  +7.583807] kauditd_printk_skb: 44 callbacks suppressed
	[Sep 8 16:39] kauditd_printk_skb: 32 callbacks suppressed
	[ +13.051737] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.088113] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.628408] kauditd_printk_skb: 80 callbacks suppressed
	[  +3.890132] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.860187] kauditd_printk_skb: 75 callbacks suppressed
	[Sep 8 16:40] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.000078] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.096997] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.834724] kauditd_printk_skb: 47 callbacks suppressed
	[Sep 8 16:41] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.000029] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.836740] kauditd_printk_skb: 107 callbacks suppressed
	[  +0.001148] kauditd_printk_skb: 75 callbacks suppressed
	[  +0.643000] kauditd_printk_skb: 129 callbacks suppressed
	[  +1.744451] kauditd_printk_skb: 219 callbacks suppressed
	[  +3.397621] kauditd_printk_skb: 37 callbacks suppressed
	[Sep 8 16:42] kauditd_printk_skb: 63 callbacks suppressed
	[  +7.953416] kauditd_printk_skb: 41 callbacks suppressed
	[Sep 8 16:44] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [f848565c2d06cdcdb359cdc38a1081c916249eef558157d015748df57581bf4d] <==
	{"level":"info","ts":"2025-09-08T16:39:37.843829Z","caller":"traceutil/trace.go:172","msg":"trace[508602843] linearizableReadLoop","detail":"{readStateIndex:1101; appliedIndex:1101; }","duration":"284.933629ms","start":"2025-09-08T16:39:37.558872Z","end":"2025-09-08T16:39:37.843805Z","steps":["trace[508602843] 'read index received'  (duration: 284.924239ms)","trace[508602843] 'applied index is now lower than readState.Index'  (duration: 7.715µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T16:39:37.844023Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"285.135654ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T16:39:37.844050Z","caller":"traceutil/trace.go:172","msg":"trace[1092967953] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1071; }","duration":"285.177269ms","start":"2025-09-08T16:39:37.558866Z","end":"2025-09-08T16:39:37.844043Z","steps":["trace[1092967953] 'agreement among raft nodes before linearized reading'  (duration: 285.050295ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T16:39:37.844545Z","caller":"traceutil/trace.go:172","msg":"trace[453775190] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"339.630994ms","start":"2025-09-08T16:39:37.504903Z","end":"2025-09-08T16:39:37.844534Z","steps":["trace[453775190] 'process raft request'  (duration: 338.953887ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T16:39:37.844678Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T16:39:37.504885Z","time spent":"339.686554ms","remote":"127.0.0.1:59812","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4187,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/csi-hostpath-attacher-0\" mod_revision:779 > success:<request_put:<key:\"/registry/pods/kube-system/csi-hostpath-attacher-0\" value_size:4129 >> failure:<request_range:<key:\"/registry/pods/kube-system/csi-hostpath-attacher-0\" > >"}
	{"level":"info","ts":"2025-09-08T16:40:43.300215Z","caller":"traceutil/trace.go:172","msg":"trace[1139850887] transaction","detail":"{read_only:false; response_revision:1222; number_of_response:1; }","duration":"181.953153ms","start":"2025-09-08T16:40:43.118239Z","end":"2025-09-08T16:40:43.300192Z","steps":["trace[1139850887] 'process raft request'  (duration: 181.73429ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T16:40:46.147518Z","caller":"traceutil/trace.go:172","msg":"trace[1195481328] linearizableReadLoop","detail":"{readStateIndex:1268; appliedIndex:1268; }","duration":"125.952164ms","start":"2025-09-08T16:40:46.021536Z","end":"2025-09-08T16:40:46.147488Z","steps":["trace[1195481328] 'read index received'  (duration: 125.944188ms)","trace[1195481328] 'applied index is now lower than readState.Index'  (duration: 7.194µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T16:40:46.147581Z","caller":"traceutil/trace.go:172","msg":"trace[121126500] transaction","detail":"{read_only:false; response_revision:1224; number_of_response:1; }","duration":"182.411727ms","start":"2025-09-08T16:40:45.965158Z","end":"2025-09-08T16:40:46.147569Z","steps":["trace[121126500] 'process raft request'  (duration: 182.221662ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T16:40:46.147630Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.091904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deviceclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T16:40:46.147665Z","caller":"traceutil/trace.go:172","msg":"trace[559026400] range","detail":"{range_begin:/registry/deviceclasses; range_end:; response_count:0; response_revision:1224; }","duration":"126.143612ms","start":"2025-09-08T16:40:46.021512Z","end":"2025-09-08T16:40:46.147656Z","steps":["trace[559026400] 'agreement among raft nodes before linearized reading'  (duration: 126.062063ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T16:41:21.718859Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"311.96326ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-09-08T16:41:21.722313Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"306.798054ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2025-09-08T16:41:21.722504Z","caller":"traceutil/trace.go:172","msg":"trace[187339295] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1437; }","duration":"306.994965ms","start":"2025-09-08T16:41:21.415500Z","end":"2025-09-08T16:41:21.722495Z","steps":["trace[187339295] 'range keys from in-memory index tree'  (duration: 305.508975ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T16:41:21.722539Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T16:41:21.415424Z","time spent":"307.100423ms","remote":"127.0.0.1:59968","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":1,"response size":576,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 "}
	{"level":"info","ts":"2025-09-08T16:41:21.721846Z","caller":"traceutil/trace.go:172","msg":"trace[1288004618] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1437; }","duration":"314.833093ms","start":"2025-09-08T16:41:21.406824Z","end":"2025-09-08T16:41:21.721658Z","steps":["trace[1288004618] 'range keys from in-memory index tree'  (duration: 311.906315ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T16:41:21.722908Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"295.339173ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T16:41:21.723053Z","caller":"traceutil/trace.go:172","msg":"trace[811840724] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1437; }","duration":"295.486159ms","start":"2025-09-08T16:41:21.427559Z","end":"2025-09-08T16:41:21.723045Z","steps":["trace[811840724] 'range keys from in-memory index tree'  (duration: 295.29646ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T16:41:21.722934Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T16:41:21.406809Z","time spent":"315.785233ms","remote":"127.0.0.1:59812","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-09-08T16:41:21.720531Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.78821ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T16:41:21.724222Z","caller":"traceutil/trace.go:172","msg":"trace[115577303] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1437; }","duration":"166.487144ms","start":"2025-09-08T16:41:21.557727Z","end":"2025-09-08T16:41:21.724214Z","steps":["trace[115577303] 'range keys from in-memory index tree'  (duration: 162.689426ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T16:41:31.808258Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"249.585503ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T16:41:31.808344Z","caller":"traceutil/trace.go:172","msg":"trace[1848900831] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1551; }","duration":"249.931185ms","start":"2025-09-08T16:41:31.558397Z","end":"2025-09-08T16:41:31.808329Z","steps":["trace[1848900831] 'range keys from in-memory index tree'  (duration: 249.550785ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T16:41:35.523417Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.228432ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-09-08T16:41:35.525007Z","caller":"traceutil/trace.go:172","msg":"trace[1960649550] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1614; }","duration":"103.819215ms","start":"2025-09-08T16:41:35.421165Z","end":"2025-09-08T16:41:35.524985Z","steps":["trace[1960649550] 'range keys from in-memory index tree'  (duration: 102.084762ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T16:42:03.885604Z","caller":"traceutil/trace.go:172","msg":"trace[1344838205] transaction","detail":"{read_only:false; response_revision:1740; number_of_response:1; }","duration":"145.961566ms","start":"2025-09-08T16:42:03.739620Z","end":"2025-09-08T16:42:03.885582Z","steps":["trace[1344838205] 'process raft request'  (duration: 145.871929ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:44:09 up 6 min,  0 users,  load average: 1.62, 1.30, 0.71
	Linux addons-198632 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [aba8204096f16cc20b6aa0d81d3042f7bd13a22c8a84b295ef817db4026adcf0] <==
	E0908 16:41:04.632598       1 conn.go:339] Error on socket receive: read tcp 192.168.39.229:8443->192.168.39.1:34924: use of closed network connection
	I0908 16:41:14.225205       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.73.222"}
	I0908 16:41:22.878779       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:41:33.022236       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0908 16:41:33.251203       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.8.72"}
	I0908 16:41:46.313730       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0908 16:41:55.891521       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0908 16:41:59.825203       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:42:17.203781       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 16:42:17.203920       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 16:42:17.237642       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 16:42:17.238891       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 16:42:17.246602       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 16:42:17.246668       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 16:42:17.271977       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 16:42:17.272034       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 16:42:17.300806       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 16:42:17.300850       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0908 16:42:18.247976       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0908 16:42:18.302787       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0908 16:42:18.323815       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0908 16:42:47.189668       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:43:23.685181       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:44:02.170392       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 16:44:07.902529       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.215.61"}
	
	
	==> kube-controller-manager [9e36da9d1e71c5e49505742d852dcf4613e52d6b5e38d6a4c654b6bde4a12fa3] <==
	E0908 16:42:25.210542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:42:28.812149       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:42:28.813234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:42:32.434225       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:42:32.435543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:42:34.827145       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:42:34.828819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:42:36.322711       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:42:36.323858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0908 16:42:43.830410       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0908 16:42:43.830542       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 16:42:43.871782       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0908 16:42:43.871856       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0908 16:42:53.499265       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:42:53.500346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:42:55.659184       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:42:55.660244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:42:56.506913       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:42:56.508910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:43:20.801978       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:43:20.803052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:43:24.677281       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:43:24.678760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 16:43:40.798049       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 16:43:40.799142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [f289b2d369f6dc210216636cb4e3f8ef4a3e29010551272e91d5e6967fe9ae12] <==
	I0908 16:38:15.832429       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 16:38:15.936542       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 16:38:15.936592       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.229"]
	E0908 16:38:15.936662       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 16:38:16.243651       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 16:38:16.245602       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 16:38:16.246566       1 server_linux.go:132] "Using iptables Proxier"
	I0908 16:38:16.273550       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 16:38:16.287380       1 server.go:527] "Version info" version="v1.34.0"
	I0908 16:38:16.288228       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 16:38:16.300774       1 config.go:200] "Starting service config controller"
	I0908 16:38:16.300808       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 16:38:16.300829       1 config.go:106] "Starting endpoint slice config controller"
	I0908 16:38:16.300833       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 16:38:16.300844       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 16:38:16.300848       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 16:38:16.309825       1 config.go:309] "Starting node config controller"
	I0908 16:38:16.309860       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 16:38:16.309867       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 16:38:16.401803       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 16:38:16.401846       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 16:38:16.401873       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [32722aed0cdde02b2503db43d822b9f4618039f5c36a4ad6777ca030a1d435bc] <==
	E0908 16:38:06.710641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 16:38:06.710719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 16:38:06.710780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 16:38:06.710857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 16:38:06.711032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 16:38:06.711295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 16:38:06.711301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 16:38:06.711404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 16:38:06.711484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 16:38:07.540172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 16:38:07.709326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 16:38:07.756679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 16:38:07.758515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 16:38:07.764645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 16:38:07.769635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 16:38:07.772148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 16:38:07.930344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 16:38:07.936017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 16:38:07.955864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 16:38:08.011663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 16:38:08.014050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 16:38:08.016779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0908 16:38:08.038644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 16:38:08.060082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I0908 16:38:10.095162       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 16:42:29 addons-198632 kubelet[1518]: E0908 16:42:29.778607    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757349749778027681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:42:34 addons-198632 kubelet[1518]: I0908 16:42:34.379927    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-lhpzm" secret="" err="secret \"gcp-auth\" not found"
	Sep 08 16:42:39 addons-198632 kubelet[1518]: E0908 16:42:39.781791    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757349759781266960  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:42:39 addons-198632 kubelet[1518]: E0908 16:42:39.781822    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757349759781266960  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:42:49 addons-198632 kubelet[1518]: E0908 16:42:49.785095    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757349769784595040  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:42:49 addons-198632 kubelet[1518]: E0908 16:42:49.785225    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757349769784595040  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:42:59 addons-198632 kubelet[1518]: E0908 16:42:59.788253    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757349779787637482  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:42:59 addons-198632 kubelet[1518]: E0908 16:42:59.788280    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757349779787637482  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:43:09 addons-198632 kubelet[1518]: E0908 16:43:09.791236    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757349789790945749  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:43:09 addons-198632 kubelet[1518]: E0908 16:43:09.791283    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757349789790945749  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:43:19 addons-198632 kubelet[1518]: E0908 16:43:19.795134    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757349799794558769  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:43:19 addons-198632 kubelet[1518]: E0908 16:43:19.795184    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757349799794558769  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:43:29 addons-198632 kubelet[1518]: E0908 16:43:29.797946    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757349809797394621  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:43:29 addons-198632 kubelet[1518]: E0908 16:43:29.797994    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757349809797394621  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:43:38 addons-198632 kubelet[1518]: I0908 16:43:38.379425    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-lhpzm" secret="" err="secret \"gcp-auth\" not found"
	Sep 08 16:43:39 addons-198632 kubelet[1518]: E0908 16:43:39.801157    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757349819800754878  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:43:39 addons-198632 kubelet[1518]: E0908 16:43:39.801180    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757349819800754878  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:43:44 addons-198632 kubelet[1518]: I0908 16:43:44.379797    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 08 16:43:49 addons-198632 kubelet[1518]: E0908 16:43:49.804347    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757349829803854248  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:43:49 addons-198632 kubelet[1518]: E0908 16:43:49.804390    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757349829803854248  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:43:59 addons-198632 kubelet[1518]: E0908 16:43:59.807622    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757349839807162139  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:43:59 addons-198632 kubelet[1518]: E0908 16:43:59.807665    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757349839807162139  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:44:07 addons-198632 kubelet[1518]: I0908 16:44:07.792959    1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq9vp\" (UniqueName: \"kubernetes.io/projected/3390f15d-2950-48d7-abcb-18127d6b7a6f-kube-api-access-sq9vp\") pod \"hello-world-app-5d498dc89-cx8wd\" (UID: \"3390f15d-2950-48d7-abcb-18127d6b7a6f\") " pod="default/hello-world-app-5d498dc89-cx8wd"
	Sep 08 16:44:09 addons-198632 kubelet[1518]: E0908 16:44:09.811215    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757349849810484437  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 08 16:44:09 addons-198632 kubelet[1518]: E0908 16:44:09.811891    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757349849810484437  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	
	
	==> storage-provisioner [8ad31740b5a40ce11aa72c79d8b03f7004e039d11651c0ac684e81b9315c0cf3] <==
	W0908 16:43:44.467240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:43:46.470394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:43:46.478754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:43:48.481802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:43:48.488497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:43:50.492827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:43:50.501292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:43:52.504364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:43:52.511426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:43:54.514536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:43:54.521939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:43:56.525820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:43:56.532267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:43:58.535987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:43:58.543734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:44:00.547379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:44:00.553416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:44:02.557353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:44:02.566162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:44:04.569742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:44:04.575762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:44:06.578947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:44:06.587388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:44:08.590153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 16:44:08.595704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-198632 -n addons-198632
helpers_test.go:269: (dbg) Run:  kubectl --context addons-198632 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-cx8wd ingress-nginx-admission-create-b8786 ingress-nginx-admission-patch-ts5cj
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-198632 describe pod hello-world-app-5d498dc89-cx8wd ingress-nginx-admission-create-b8786 ingress-nginx-admission-patch-ts5cj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-198632 describe pod hello-world-app-5d498dc89-cx8wd ingress-nginx-admission-create-b8786 ingress-nginx-admission-patch-ts5cj: exit status 1 (74.770014ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-cx8wd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-198632/192.168.39.229
	Start Time:       Mon, 08 Sep 2025 16:44:07 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sq9vp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sq9vp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-cx8wd to addons-198632
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-b8786" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ts5cj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-198632 describe pod hello-world-app-5d498dc89-cx8wd ingress-nginx-admission-create-b8786 ingress-nginx-admission-patch-ts5cj: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198632 addons disable ingress-dns --alsologtostderr -v=1: (1.58664987s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198632 addons disable ingress --alsologtostderr -v=1: (7.803377046s)
--- FAIL: TestAddons/parallel/Ingress (167.26s)

                                                
                                    
x
+
TestPreload (160.16s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-547294 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E0908 17:34:48.238726   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-547294 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m24.127886672s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-547294 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-547294 image pull gcr.io/k8s-minikube/busybox: (3.465523417s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-547294
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-547294: (7.299641775s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-547294 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0908 17:35:37.369446   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:35:54.297606   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-547294 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.107408361s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-547294 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
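The failing flow above is a plain sequence of minikube CLI calls: start with --preload=false on v1.32.0, pull busybox, stop, restart the same profile, then confirm the image is still listed. As a rough illustration only (not the actual preload_test.go code), a standalone Go sketch of that same check might look like the following; the binary path out/minikube-linux-amd64 and the profile name test-preload-547294 are taken from this run, and the file name repro_preload.go is hypothetical.

// repro_preload.go: hand-run approximation of the TestPreload flow above.
// Assumes the minikube binary and profile name used in this report.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// run invokes the locally built minikube binary and aborts on any error.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "command %v failed: %v\n%s", args, err, out)
		os.Exit(1)
	}
	return string(out)
}

func main() {
	const profile = "test-preload-547294" // profile name from this run

	// Start without a preload so images land directly in the CRI-O image store.
	run("start", "-p", profile, "--memory=3072", "--preload=false",
		"--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.32.0")

	// Pull an extra image, stop the VM, then restart the same profile.
	run("-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", profile)
	run("start", "-p", profile, "--memory=3072", "--wait=true",
		"--driver=kvm2", "--container-runtime=crio")

	// The image pulled before the restart should still be listed afterwards.
	list := run("-p", profile, "image", "list")
	if !strings.Contains(list, "gcr.io/k8s-minikube/busybox") {
		fmt.Println("FAIL: busybox missing after restart:\n" + list)
		os.Exit(1)
	}
	fmt.Println("PASS: busybox retained across restart")
}

The final strings.Contains check mirrors the assertion at preload_test.go:75, which is the one that failed in this run because busybox was absent from the post-restart image list.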
panic.go:636: *** TestPreload FAILED at 2025-09-08 17:36:05.352301155 +0000 UTC m=+3565.339675883
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-547294 -n test-preload-547294
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-547294 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-547294 logs -n 25: (1.150477084s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-079335 ssh -n multinode-079335-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:21 UTC │ 08 Sep 25 17:21 UTC │
	│ ssh     │ multinode-079335 ssh -n multinode-079335 sudo cat /home/docker/cp-test_multinode-079335-m03_multinode-079335.txt                                          │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:21 UTC │ 08 Sep 25 17:21 UTC │
	│ cp      │ multinode-079335 cp multinode-079335-m03:/home/docker/cp-test.txt multinode-079335-m02:/home/docker/cp-test_multinode-079335-m03_multinode-079335-m02.txt │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:21 UTC │ 08 Sep 25 17:21 UTC │
	│ ssh     │ multinode-079335 ssh -n multinode-079335-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:21 UTC │ 08 Sep 25 17:21 UTC │
	│ ssh     │ multinode-079335 ssh -n multinode-079335-m02 sudo cat /home/docker/cp-test_multinode-079335-m03_multinode-079335-m02.txt                                  │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:21 UTC │ 08 Sep 25 17:21 UTC │
	│ node    │ multinode-079335 node stop m03                                                                                                                            │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:21 UTC │ 08 Sep 25 17:21 UTC │
	│ node    │ multinode-079335 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:21 UTC │ 08 Sep 25 17:22 UTC │
	│ node    │ list -p multinode-079335                                                                                                                                  │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:22 UTC │                     │
	│ stop    │ -p multinode-079335                                                                                                                                       │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:22 UTC │ 08 Sep 25 17:25 UTC │
	│ start   │ -p multinode-079335 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:25 UTC │ 08 Sep 25 17:27 UTC │
	│ node    │ list -p multinode-079335                                                                                                                                  │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:27 UTC │                     │
	│ node    │ multinode-079335 node delete m03                                                                                                                          │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:27 UTC │ 08 Sep 25 17:27 UTC │
	│ stop    │ multinode-079335 stop                                                                                                                                     │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:28 UTC │ 08 Sep 25 17:31 UTC │
	│ start   │ -p multinode-079335 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:31 UTC │ 08 Sep 25 17:32 UTC │
	│ node    │ list -p multinode-079335                                                                                                                                  │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:32 UTC │                     │
	│ start   │ -p multinode-079335-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-079335-m02 │ jenkins │ v1.36.0 │ 08 Sep 25 17:32 UTC │                     │
	│ start   │ -p multinode-079335-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-079335-m03 │ jenkins │ v1.36.0 │ 08 Sep 25 17:32 UTC │ 08 Sep 25 17:33 UTC │
	│ node    │ add -p multinode-079335                                                                                                                                   │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:33 UTC │                     │
	│ delete  │ -p multinode-079335-m03                                                                                                                                   │ multinode-079335-m03 │ jenkins │ v1.36.0 │ 08 Sep 25 17:33 UTC │ 08 Sep 25 17:33 UTC │
	│ delete  │ -p multinode-079335                                                                                                                                       │ multinode-079335     │ jenkins │ v1.36.0 │ 08 Sep 25 17:33 UTC │ 08 Sep 25 17:33 UTC │
	│ start   │ -p test-preload-547294 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-547294  │ jenkins │ v1.36.0 │ 08 Sep 25 17:33 UTC │ 08 Sep 25 17:34 UTC │
	│ image   │ test-preload-547294 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-547294  │ jenkins │ v1.36.0 │ 08 Sep 25 17:34 UTC │ 08 Sep 25 17:34 UTC │
	│ stop    │ -p test-preload-547294                                                                                                                                    │ test-preload-547294  │ jenkins │ v1.36.0 │ 08 Sep 25 17:34 UTC │ 08 Sep 25 17:35 UTC │
	│ start   │ -p test-preload-547294 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-547294  │ jenkins │ v1.36.0 │ 08 Sep 25 17:35 UTC │ 08 Sep 25 17:36 UTC │
	│ image   │ test-preload-547294 image list                                                                                                                            │ test-preload-547294  │ jenkins │ v1.36.0 │ 08 Sep 25 17:36 UTC │ 08 Sep 25 17:36 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 17:35:03
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 17:35:03.068588   43239 out.go:360] Setting OutFile to fd 1 ...
	I0908 17:35:03.068822   43239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:35:03.068830   43239 out.go:374] Setting ErrFile to fd 2...
	I0908 17:35:03.068834   43239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:35:03.069010   43239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
	I0908 17:35:03.069539   43239 out.go:368] Setting JSON to false
	I0908 17:35:03.070425   43239 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4646,"bootTime":1757348257,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 17:35:03.070478   43239 start.go:140] virtualization: kvm guest
	I0908 17:35:03.072800   43239 out.go:179] * [test-preload-547294] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 17:35:03.074252   43239 notify.go:220] Checking for updates...
	I0908 17:35:03.074309   43239 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 17:35:03.075652   43239 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 17:35:03.077046   43239 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 17:35:03.078322   43239 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 17:35:03.079646   43239 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 17:35:03.081094   43239 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 17:35:03.083271   43239 config.go:182] Loaded profile config "test-preload-547294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0908 17:35:03.083842   43239 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:35:03.083916   43239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:35:03.099356   43239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40525
	I0908 17:35:03.099837   43239 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:35:03.100399   43239 main.go:141] libmachine: Using API Version  1
	I0908 17:35:03.100438   43239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:35:03.100810   43239 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:35:03.101031   43239 main.go:141] libmachine: (test-preload-547294) Calling .DriverName
	I0908 17:35:03.103104   43239 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0908 17:35:03.104532   43239 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 17:35:03.104955   43239 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:35:03.105003   43239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:35:03.120168   43239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38467
	I0908 17:35:03.120799   43239 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:35:03.121248   43239 main.go:141] libmachine: Using API Version  1
	I0908 17:35:03.121275   43239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:35:03.121637   43239 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:35:03.121824   43239 main.go:141] libmachine: (test-preload-547294) Calling .DriverName
	I0908 17:35:03.158169   43239 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 17:35:03.159539   43239 start.go:304] selected driver: kvm2
	I0908 17:35:03.159558   43239 start.go:918] validating driver "kvm2" against &{Name:test-preload-547294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-547294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 17:35:03.159670   43239 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 17:35:03.160368   43239 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 17:35:03.160458   43239 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21504-7629/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 17:35:03.176503   43239 install.go:137] /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 17:35:03.176911   43239 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 17:35:03.176941   43239 cni.go:84] Creating CNI manager for ""
	I0908 17:35:03.177008   43239 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 17:35:03.177054   43239 start.go:348] cluster config:
	{Name:test-preload-547294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-547294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 17:35:03.177152   43239 iso.go:125] acquiring lock: {Name:mkaf49872b434993209a65bf0f93ea3e4c6d93b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 17:35:03.179105   43239 out.go:179] * Starting "test-preload-547294" primary control-plane node in "test-preload-547294" cluster
	I0908 17:35:03.180389   43239 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0908 17:35:03.652172   43239 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0908 17:35:03.652199   43239 cache.go:58] Caching tarball of preloaded images
	I0908 17:35:03.652351   43239 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0908 17:35:03.654185   43239 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I0908 17:35:03.655534   43239 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 17:35:03.769941   43239 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0908 17:35:18.596629   43239 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 17:35:18.597503   43239 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 17:35:19.331991   43239 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0908 17:35:19.332138   43239 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/config.json ...
	I0908 17:35:19.332428   43239 start.go:360] acquireMachinesLock for test-preload-547294: {Name:mka7c3ca4a3e37e9483e7804183d91c6725d32e4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 17:35:19.332506   43239 start.go:364] duration metric: took 50.818µs to acquireMachinesLock for "test-preload-547294"
	I0908 17:35:19.332530   43239 start.go:96] Skipping create...Using existing machine configuration
	I0908 17:35:19.332538   43239 fix.go:54] fixHost starting: 
	I0908 17:35:19.332837   43239 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:35:19.332880   43239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:35:19.347578   43239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0908 17:35:19.348026   43239 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:35:19.348448   43239 main.go:141] libmachine: Using API Version  1
	I0908 17:35:19.348468   43239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:35:19.348799   43239 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:35:19.349018   43239 main.go:141] libmachine: (test-preload-547294) Calling .DriverName
	I0908 17:35:19.349137   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetState
	I0908 17:35:19.350993   43239 fix.go:112] recreateIfNeeded on test-preload-547294: state=Stopped err=<nil>
	I0908 17:35:19.351025   43239 main.go:141] libmachine: (test-preload-547294) Calling .DriverName
	W0908 17:35:19.351149   43239 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 17:35:19.353028   43239 out.go:252] * Restarting existing kvm2 VM for "test-preload-547294" ...
	I0908 17:35:19.353055   43239 main.go:141] libmachine: (test-preload-547294) Calling .Start
	I0908 17:35:19.353220   43239 main.go:141] libmachine: (test-preload-547294) starting domain...
	I0908 17:35:19.353238   43239 main.go:141] libmachine: (test-preload-547294) ensuring networks are active...
	I0908 17:35:19.353904   43239 main.go:141] libmachine: (test-preload-547294) Ensuring network default is active
	I0908 17:35:19.354202   43239 main.go:141] libmachine: (test-preload-547294) Ensuring network mk-test-preload-547294 is active
	I0908 17:35:19.354535   43239 main.go:141] libmachine: (test-preload-547294) getting domain XML...
	I0908 17:35:19.355341   43239 main.go:141] libmachine: (test-preload-547294) creating domain...
	I0908 17:35:20.568258   43239 main.go:141] libmachine: (test-preload-547294) waiting for IP...
	I0908 17:35:20.569242   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:20.569753   43239 main.go:141] libmachine: (test-preload-547294) DBG | unable to find current IP address of domain test-preload-547294 in network mk-test-preload-547294
	I0908 17:35:20.569850   43239 main.go:141] libmachine: (test-preload-547294) DBG | I0908 17:35:20.569763   43322 retry.go:31] will retry after 227.746599ms: waiting for domain to come up
	I0908 17:35:20.799481   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:20.799895   43239 main.go:141] libmachine: (test-preload-547294) DBG | unable to find current IP address of domain test-preload-547294 in network mk-test-preload-547294
	I0908 17:35:20.799918   43239 main.go:141] libmachine: (test-preload-547294) DBG | I0908 17:35:20.799852   43322 retry.go:31] will retry after 289.586544ms: waiting for domain to come up
	I0908 17:35:21.091625   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:21.092057   43239 main.go:141] libmachine: (test-preload-547294) DBG | unable to find current IP address of domain test-preload-547294 in network mk-test-preload-547294
	I0908 17:35:21.092081   43239 main.go:141] libmachine: (test-preload-547294) DBG | I0908 17:35:21.092036   43322 retry.go:31] will retry after 297.452348ms: waiting for domain to come up
	I0908 17:35:21.391568   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:21.391963   43239 main.go:141] libmachine: (test-preload-547294) DBG | unable to find current IP address of domain test-preload-547294 in network mk-test-preload-547294
	I0908 17:35:21.391991   43239 main.go:141] libmachine: (test-preload-547294) DBG | I0908 17:35:21.391947   43322 retry.go:31] will retry after 387.568227ms: waiting for domain to come up
	I0908 17:35:21.781600   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:21.782034   43239 main.go:141] libmachine: (test-preload-547294) DBG | unable to find current IP address of domain test-preload-547294 in network mk-test-preload-547294
	I0908 17:35:21.782061   43239 main.go:141] libmachine: (test-preload-547294) DBG | I0908 17:35:21.782001   43322 retry.go:31] will retry after 551.107701ms: waiting for domain to come up
	I0908 17:35:22.335036   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:22.335421   43239 main.go:141] libmachine: (test-preload-547294) DBG | unable to find current IP address of domain test-preload-547294 in network mk-test-preload-547294
	I0908 17:35:22.335468   43239 main.go:141] libmachine: (test-preload-547294) DBG | I0908 17:35:22.335414   43322 retry.go:31] will retry after 727.873585ms: waiting for domain to come up
	I0908 17:35:23.065291   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:23.065684   43239 main.go:141] libmachine: (test-preload-547294) DBG | unable to find current IP address of domain test-preload-547294 in network mk-test-preload-547294
	I0908 17:35:23.065707   43239 main.go:141] libmachine: (test-preload-547294) DBG | I0908 17:35:23.065635   43322 retry.go:31] will retry after 1.052773487s: waiting for domain to come up
	I0908 17:35:24.119895   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:24.120627   43239 main.go:141] libmachine: (test-preload-547294) DBG | unable to find current IP address of domain test-preload-547294 in network mk-test-preload-547294
	I0908 17:35:24.120654   43239 main.go:141] libmachine: (test-preload-547294) DBG | I0908 17:35:24.120561   43322 retry.go:31] will retry after 908.762132ms: waiting for domain to come up
	I0908 17:35:25.031414   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:25.031912   43239 main.go:141] libmachine: (test-preload-547294) DBG | unable to find current IP address of domain test-preload-547294 in network mk-test-preload-547294
	I0908 17:35:25.031957   43239 main.go:141] libmachine: (test-preload-547294) DBG | I0908 17:35:25.031898   43322 retry.go:31] will retry after 1.604189525s: waiting for domain to come up
	I0908 17:35:26.638745   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:26.639424   43239 main.go:141] libmachine: (test-preload-547294) DBG | unable to find current IP address of domain test-preload-547294 in network mk-test-preload-547294
	I0908 17:35:26.639521   43239 main.go:141] libmachine: (test-preload-547294) DBG | I0908 17:35:26.639419   43322 retry.go:31] will retry after 2.176130195s: waiting for domain to come up
	I0908 17:35:28.819218   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:28.819716   43239 main.go:141] libmachine: (test-preload-547294) DBG | unable to find current IP address of domain test-preload-547294 in network mk-test-preload-547294
	I0908 17:35:28.819752   43239 main.go:141] libmachine: (test-preload-547294) DBG | I0908 17:35:28.819686   43322 retry.go:31] will retry after 1.800893982s: waiting for domain to come up
	I0908 17:35:30.622385   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:30.622914   43239 main.go:141] libmachine: (test-preload-547294) DBG | unable to find current IP address of domain test-preload-547294 in network mk-test-preload-547294
	I0908 17:35:30.622937   43239 main.go:141] libmachine: (test-preload-547294) DBG | I0908 17:35:30.622892   43322 retry.go:31] will retry after 2.240259936s: waiting for domain to come up
	I0908 17:35:32.866545   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:32.867030   43239 main.go:141] libmachine: (test-preload-547294) DBG | unable to find current IP address of domain test-preload-547294 in network mk-test-preload-547294
	I0908 17:35:32.867059   43239 main.go:141] libmachine: (test-preload-547294) DBG | I0908 17:35:32.866973   43322 retry.go:31] will retry after 4.370905212s: waiting for domain to come up
	I0908 17:35:37.241125   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.241577   43239 main.go:141] libmachine: (test-preload-547294) found domain IP: 192.168.39.30
	I0908 17:35:37.241608   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has current primary IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.241617   43239 main.go:141] libmachine: (test-preload-547294) reserving static IP address...
	I0908 17:35:37.241989   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "test-preload-547294", mac: "52:54:00:05:b9:d2", ip: "192.168.39.30"} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:37.242017   43239 main.go:141] libmachine: (test-preload-547294) DBG | skip adding static IP to network mk-test-preload-547294 - found existing host DHCP lease matching {name: "test-preload-547294", mac: "52:54:00:05:b9:d2", ip: "192.168.39.30"}
	I0908 17:35:37.242031   43239 main.go:141] libmachine: (test-preload-547294) reserved static IP address 192.168.39.30 for domain test-preload-547294
	I0908 17:35:37.242049   43239 main.go:141] libmachine: (test-preload-547294) waiting for SSH...
	I0908 17:35:37.242063   43239 main.go:141] libmachine: (test-preload-547294) DBG | Getting to WaitForSSH function...
	I0908 17:35:37.244332   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.244710   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:37.244738   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.244926   43239 main.go:141] libmachine: (test-preload-547294) DBG | Using SSH client type: external
	I0908 17:35:37.244963   43239 main.go:141] libmachine: (test-preload-547294) DBG | Using SSH private key: /home/jenkins/minikube-integration/21504-7629/.minikube/machines/test-preload-547294/id_rsa (-rw-------)
	I0908 17:35:37.244990   43239 main.go:141] libmachine: (test-preload-547294) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21504-7629/.minikube/machines/test-preload-547294/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0908 17:35:37.245003   43239 main.go:141] libmachine: (test-preload-547294) DBG | About to run SSH command:
	I0908 17:35:37.245009   43239 main.go:141] libmachine: (test-preload-547294) DBG | exit 0
	I0908 17:35:37.372105   43239 main.go:141] libmachine: (test-preload-547294) DBG | SSH cmd err, output: <nil>: 
	I0908 17:35:37.372461   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetConfigRaw
	I0908 17:35:37.373100   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetIP
	I0908 17:35:37.375651   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.375960   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:37.375993   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.376193   43239 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/config.json ...
	I0908 17:35:37.376482   43239 machine.go:93] provisionDockerMachine start ...
	I0908 17:35:37.376505   43239 main.go:141] libmachine: (test-preload-547294) Calling .DriverName
	I0908 17:35:37.376755   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHHostname
	I0908 17:35:37.379294   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.379683   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:37.379708   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.379876   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHPort
	I0908 17:35:37.380062   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:37.380245   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:37.380393   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHUsername
	I0908 17:35:37.380544   43239 main.go:141] libmachine: Using SSH client type: native
	I0908 17:35:37.380804   43239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0908 17:35:37.380817   43239 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 17:35:37.496134   43239 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0908 17:35:37.496168   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetMachineName
	I0908 17:35:37.496518   43239 buildroot.go:166] provisioning hostname "test-preload-547294"
	I0908 17:35:37.496543   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetMachineName
	I0908 17:35:37.496744   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHHostname
	I0908 17:35:37.499658   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.500010   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:37.500037   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.500208   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHPort
	I0908 17:35:37.500391   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:37.500610   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:37.500759   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHUsername
	I0908 17:35:37.500958   43239 main.go:141] libmachine: Using SSH client type: native
	I0908 17:35:37.501169   43239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0908 17:35:37.501186   43239 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-547294 && echo "test-preload-547294" | sudo tee /etc/hostname
	I0908 17:35:37.639079   43239 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-547294
	
	I0908 17:35:37.639116   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHHostname
	I0908 17:35:37.642135   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.642533   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:37.642557   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.642764   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHPort
	I0908 17:35:37.642947   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:37.643111   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:37.643245   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHUsername
	I0908 17:35:37.643432   43239 main.go:141] libmachine: Using SSH client type: native
	I0908 17:35:37.643619   43239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0908 17:35:37.643636   43239 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-547294' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-547294/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-547294' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 17:35:37.766244   43239 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 17:35:37.766324   43239 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21504-7629/.minikube CaCertPath:/home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21504-7629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21504-7629/.minikube}
	I0908 17:35:37.766377   43239 buildroot.go:174] setting up certificates
	I0908 17:35:37.766392   43239 provision.go:84] configureAuth start
	I0908 17:35:37.766408   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetMachineName
	I0908 17:35:37.766770   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetIP
	I0908 17:35:37.770035   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.770520   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:37.770555   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.770742   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHHostname
	I0908 17:35:37.773173   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.773526   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:37.773580   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.773705   43239 provision.go:143] copyHostCerts
	I0908 17:35:37.773783   43239 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-7629/.minikube/cert.pem, removing ...
	I0908 17:35:37.773804   43239 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-7629/.minikube/cert.pem
	I0908 17:35:37.773909   43239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21504-7629/.minikube/cert.pem (1123 bytes)
	I0908 17:35:37.774059   43239 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-7629/.minikube/key.pem, removing ...
	I0908 17:35:37.774074   43239 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-7629/.minikube/key.pem
	I0908 17:35:37.774117   43239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21504-7629/.minikube/key.pem (1679 bytes)
	I0908 17:35:37.774210   43239 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-7629/.minikube/ca.pem, removing ...
	I0908 17:35:37.774221   43239 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-7629/.minikube/ca.pem
	I0908 17:35:37.774259   43239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21504-7629/.minikube/ca.pem (1078 bytes)
	I0908 17:35:37.774338   43239 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21504-7629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca-key.pem org=jenkins.test-preload-547294 san=[127.0.0.1 192.168.39.30 localhost minikube test-preload-547294]
	I0908 17:35:37.831335   43239 provision.go:177] copyRemoteCerts
	I0908 17:35:37.831396   43239 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 17:35:37.831419   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHHostname
	I0908 17:35:37.834296   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.834726   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:37.834757   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:37.834949   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHPort
	I0908 17:35:37.835212   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:37.835377   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHUsername
	I0908 17:35:37.835512   43239 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/test-preload-547294/id_rsa Username:docker}
	I0908 17:35:37.923966   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 17:35:37.961728   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0908 17:35:37.992677   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 17:35:38.022845   43239 provision.go:87] duration metric: took 256.44003ms to configureAuth
	I0908 17:35:38.022871   43239 buildroot.go:189] setting minikube options for container-runtime
	I0908 17:35:38.023030   43239 config.go:182] Loaded profile config "test-preload-547294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0908 17:35:38.023095   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHHostname
	I0908 17:35:38.026041   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:38.026352   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:38.026379   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:38.026565   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHPort
	I0908 17:35:38.026790   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:38.026981   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:38.027134   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHUsername
	I0908 17:35:38.027290   43239 main.go:141] libmachine: Using SSH client type: native
	I0908 17:35:38.027551   43239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0908 17:35:38.027568   43239 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 17:35:38.282094   43239 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 17:35:38.282119   43239 machine.go:96] duration metric: took 905.622469ms to provisionDockerMachine
	I0908 17:35:38.282134   43239 start.go:293] postStartSetup for "test-preload-547294" (driver="kvm2")
	I0908 17:35:38.282148   43239 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 17:35:38.282168   43239 main.go:141] libmachine: (test-preload-547294) Calling .DriverName
	I0908 17:35:38.282507   43239 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 17:35:38.282538   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHHostname
	I0908 17:35:38.285489   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:38.285833   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:38.285859   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:38.286020   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHPort
	I0908 17:35:38.286214   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:38.286369   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHUsername
	I0908 17:35:38.286503   43239 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/test-preload-547294/id_rsa Username:docker}
	I0908 17:35:38.376608   43239 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 17:35:38.381713   43239 info.go:137] Remote host: Buildroot 2025.02
	I0908 17:35:38.381741   43239 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-7629/.minikube/addons for local assets ...
	I0908 17:35:38.381837   43239 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-7629/.minikube/files for local assets ...
	I0908 17:35:38.381918   43239 filesync.go:149] local asset: /home/jenkins/minikube-integration/21504-7629/.minikube/files/etc/ssl/certs/117812.pem -> 117812.pem in /etc/ssl/certs
	I0908 17:35:38.382004   43239 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 17:35:38.394452   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/files/etc/ssl/certs/117812.pem --> /etc/ssl/certs/117812.pem (1708 bytes)
	I0908 17:35:38.425808   43239 start.go:296] duration metric: took 143.657911ms for postStartSetup
	I0908 17:35:38.425849   43239 fix.go:56] duration metric: took 19.093312409s for fixHost
	I0908 17:35:38.425873   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHHostname
	I0908 17:35:38.428998   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:38.429341   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:38.429363   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:38.429533   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHPort
	I0908 17:35:38.429752   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:38.429923   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:38.430058   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHUsername
	I0908 17:35:38.430216   43239 main.go:141] libmachine: Using SSH client type: native
	I0908 17:35:38.430419   43239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0908 17:35:38.430429   43239 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 17:35:38.544771   43239 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757352938.499383862
	
	I0908 17:35:38.544794   43239 fix.go:216] guest clock: 1757352938.499383862
	I0908 17:35:38.544804   43239 fix.go:229] Guest: 2025-09-08 17:35:38.499383862 +0000 UTC Remote: 2025-09-08 17:35:38.425853528 +0000 UTC m=+35.392981514 (delta=73.530334ms)
	I0908 17:35:38.544849   43239 fix.go:200] guest clock delta is within tolerance: 73.530334ms
	I0908 17:35:38.544857   43239 start.go:83] releasing machines lock for "test-preload-547294", held for 19.212335883s
	I0908 17:35:38.544882   43239 main.go:141] libmachine: (test-preload-547294) Calling .DriverName
	I0908 17:35:38.545137   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetIP
	I0908 17:35:38.548041   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:38.548410   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:38.548438   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:38.548604   43239 main.go:141] libmachine: (test-preload-547294) Calling .DriverName
	I0908 17:35:38.549132   43239 main.go:141] libmachine: (test-preload-547294) Calling .DriverName
	I0908 17:35:38.549457   43239 main.go:141] libmachine: (test-preload-547294) Calling .DriverName
	I0908 17:35:38.549555   43239 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 17:35:38.549596   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHHostname
	I0908 17:35:38.549712   43239 ssh_runner.go:195] Run: cat /version.json
	I0908 17:35:38.549734   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHHostname
	I0908 17:35:38.552456   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:38.552544   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:38.552820   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:38.552843   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:38.552872   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:38.552894   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:38.553021   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHPort
	I0908 17:35:38.553208   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHPort
	I0908 17:35:38.553234   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:38.553461   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:38.553459   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHUsername
	I0908 17:35:38.553636   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHUsername
	I0908 17:35:38.553650   43239 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/test-preload-547294/id_rsa Username:docker}
	I0908 17:35:38.553757   43239 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/test-preload-547294/id_rsa Username:docker}
	I0908 17:35:38.636426   43239 ssh_runner.go:195] Run: systemctl --version
	I0908 17:35:38.667582   43239 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 17:35:38.815970   43239 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 17:35:38.823011   43239 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 17:35:38.823075   43239 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 17:35:38.843502   43239 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0908 17:35:38.843527   43239 start.go:495] detecting cgroup driver to use...
	I0908 17:35:38.843584   43239 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 17:35:38.863331   43239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 17:35:38.880841   43239 docker.go:218] disabling cri-docker service (if available) ...
	I0908 17:35:38.880897   43239 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 17:35:38.899944   43239 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 17:35:38.916662   43239 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 17:35:39.064527   43239 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 17:35:39.271574   43239 docker.go:234] disabling docker service ...
	I0908 17:35:39.271661   43239 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 17:35:39.288848   43239 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 17:35:39.305192   43239 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 17:35:39.460865   43239 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 17:35:39.601631   43239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 17:35:39.617719   43239 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 17:35:39.641686   43239 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0908 17:35:39.641753   43239 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 17:35:39.654961   43239 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 17:35:39.655036   43239 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 17:35:39.668646   43239 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 17:35:39.682171   43239 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 17:35:39.695511   43239 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 17:35:39.709567   43239 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 17:35:39.722663   43239 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 17:35:39.744401   43239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 17:35:39.757748   43239 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 17:35:39.768790   43239 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0908 17:35:39.768862   43239 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0908 17:35:39.791610   43239 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 17:35:39.804476   43239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 17:35:39.953132   43239 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 17:35:40.080126   43239 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 17:35:40.080213   43239 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 17:35:40.086067   43239 start.go:563] Will wait 60s for crictl version
	I0908 17:35:40.086122   43239 ssh_runner.go:195] Run: which crictl
	I0908 17:35:40.091346   43239 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 17:35:40.135218   43239 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 17:35:40.135289   43239 ssh_runner.go:195] Run: crio --version
	I0908 17:35:40.164747   43239 ssh_runner.go:195] Run: crio --version
	I0908 17:35:40.196662   43239 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0908 17:35:40.197749   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetIP
	I0908 17:35:40.200690   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:40.201063   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:40.201089   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:40.201312   43239 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0908 17:35:40.206217   43239 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 17:35:40.222310   43239 kubeadm.go:875] updating cluster {Name:test-preload-547294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-547294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 17:35:40.222415   43239 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0908 17:35:40.222465   43239 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 17:35:40.265778   43239 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0908 17:35:40.265838   43239 ssh_runner.go:195] Run: which lz4
	I0908 17:35:40.270280   43239 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0908 17:35:40.275151   43239 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0908 17:35:40.275181   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0908 17:35:41.886410   43239 crio.go:462] duration metric: took 1.616157006s to copy over tarball
	I0908 17:35:41.886481   43239 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0908 17:35:43.628285   43239 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.74177579s)
	I0908 17:35:43.628351   43239 crio.go:469] duration metric: took 1.741913979s to extract the tarball
	I0908 17:35:43.628362   43239 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0908 17:35:43.668210   43239 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 17:35:43.712095   43239 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 17:35:43.712125   43239 cache_images.go:85] Images are preloaded, skipping loading
	I0908 17:35:43.712134   43239 kubeadm.go:926] updating node { 192.168.39.30 8443 v1.32.0 crio true true} ...
	I0908 17:35:43.712239   43239 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-547294 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-547294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 17:35:43.712299   43239 ssh_runner.go:195] Run: crio config
	I0908 17:35:43.759008   43239 cni.go:84] Creating CNI manager for ""
	I0908 17:35:43.759032   43239 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 17:35:43.759043   43239 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 17:35:43.759062   43239 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.30 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-547294 NodeName:test-preload-547294 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 17:35:43.759208   43239 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-547294"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.30"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.30"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 17:35:43.759276   43239 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0908 17:35:43.771571   43239 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 17:35:43.771629   43239 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 17:35:43.783266   43239 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0908 17:35:43.804152   43239 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 17:35:43.824807   43239 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I0908 17:35:43.846488   43239 ssh_runner.go:195] Run: grep 192.168.39.30	control-plane.minikube.internal$ /etc/hosts
	I0908 17:35:43.850930   43239 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 17:35:43.866207   43239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 17:35:44.015424   43239 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 17:35:44.056577   43239 certs.go:68] Setting up /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294 for IP: 192.168.39.30
	I0908 17:35:44.056601   43239 certs.go:194] generating shared ca certs ...
	I0908 17:35:44.056617   43239 certs.go:226] acquiring lock for ca certs: {Name:mk97fb352a8636fddbcae5a6f40efc0f573cd949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 17:35:44.056776   43239 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21504-7629/.minikube/ca.key
	I0908 17:35:44.056828   43239 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21504-7629/.minikube/proxy-client-ca.key
	I0908 17:35:44.056841   43239 certs.go:256] generating profile certs ...
	I0908 17:35:44.056935   43239 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/client.key
	I0908 17:35:44.057009   43239 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/apiserver.key.84b13a81
	I0908 17:35:44.057068   43239 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/proxy-client.key
	I0908 17:35:44.057223   43239 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/11781.pem (1338 bytes)
	W0908 17:35:44.057262   43239 certs.go:480] ignoring /home/jenkins/minikube-integration/21504-7629/.minikube/certs/11781_empty.pem, impossibly tiny 0 bytes
	I0908 17:35:44.057277   43239 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca-key.pem (1671 bytes)
	I0908 17:35:44.057316   43239 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem (1078 bytes)
	I0908 17:35:44.057347   43239 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/cert.pem (1123 bytes)
	I0908 17:35:44.057377   43239 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/key.pem (1679 bytes)
	I0908 17:35:44.057429   43239 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/files/etc/ssl/certs/117812.pem (1708 bytes)
	I0908 17:35:44.058104   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 17:35:44.094481   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 17:35:44.128403   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 17:35:44.157811   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 17:35:44.187451   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0908 17:35:44.217711   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 17:35:44.248321   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 17:35:44.278210   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 17:35:44.308481   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/files/etc/ssl/certs/117812.pem --> /usr/share/ca-certificates/117812.pem (1708 bytes)
	I0908 17:35:44.337468   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 17:35:44.366511   43239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/certs/11781.pem --> /usr/share/ca-certificates/11781.pem (1338 bytes)
	I0908 17:35:44.395027   43239 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 17:35:44.416288   43239 ssh_runner.go:195] Run: openssl version
	I0908 17:35:44.422774   43239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11781.pem && ln -fs /usr/share/ca-certificates/11781.pem /etc/ssl/certs/11781.pem"
	I0908 17:35:44.438549   43239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11781.pem
	I0908 17:35:44.444353   43239 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 16:46 /usr/share/ca-certificates/11781.pem
	I0908 17:35:44.444422   43239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11781.pem
	I0908 17:35:44.452069   43239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11781.pem /etc/ssl/certs/51391683.0"
	I0908 17:35:44.467276   43239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117812.pem && ln -fs /usr/share/ca-certificates/117812.pem /etc/ssl/certs/117812.pem"
	I0908 17:35:44.480735   43239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117812.pem
	I0908 17:35:44.486222   43239 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 16:46 /usr/share/ca-certificates/117812.pem
	I0908 17:35:44.486305   43239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117812.pem
	I0908 17:35:44.493573   43239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117812.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 17:35:44.507052   43239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 17:35:44.520610   43239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 17:35:44.526043   43239 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 17:35:44.526116   43239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 17:35:44.533831   43239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
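The three blocks above repeat one pattern per CA file: link the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash with "openssl x509 -hash -noout", and symlink it as /etc/ssl/certs/<hash>.0 (51391683.0, 3ec20f2e.0, b5213941.0 here) so OpenSSL-based clients can find it by subject. A minimal sketch of that convention follows; it assumes openssl is on PATH, and the helper name installCACert is illustrative, not minikube's API.

    // ca_symlink.go: sketch of the subject-hash symlink convention applied in
    // the log above via "openssl x509 -hash" + "ln -fs". Illustrative only.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert computes the OpenSSL subject hash of certPath and creates
    // /etc/ssl/certs/<hash>.0 pointing at it, which is how OpenSSL locates
    // trusted CAs by subject.
    func installCACert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // "-fs" semantics: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }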
	I0908 17:35:44.548101   43239 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 17:35:44.553638   43239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 17:35:44.561301   43239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 17:35:44.568794   43239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 17:35:44.576818   43239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 17:35:44.584513   43239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 17:35:44.592364   43239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
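The six "openssl x509 -noout ... -checkend 86400" runs above verify that each control-plane certificate is not within 24 hours of expiry before reusing it. A rough Go equivalent of that check, assuming each file holds a single PEM-encoded certificate:

    // checkend.go: rough equivalent of "openssl x509 -noout -checkend 86400",
    // i.e. report whether the certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(2)
    	}
    	if soon {
    		os.Exit(1) // same convention as openssl -checkend: non-zero when expiring
    	}
    }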
	I0908 17:35:44.599817   43239 kubeadm.go:392] StartCluster: {Name:test-preload-547294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
32.0 ClusterName:test-preload-547294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 17:35:44.599898   43239 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 17:35:44.599941   43239 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 17:35:44.639235   43239 cri.go:89] found id: ""
	I0908 17:35:44.639308   43239 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 17:35:44.652270   43239 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 17:35:44.652293   43239 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 17:35:44.652343   43239 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 17:35:44.664771   43239 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 17:35:44.665150   43239 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-547294" does not appear in /home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 17:35:44.665240   43239 kubeconfig.go:62] /home/jenkins/minikube-integration/21504-7629/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-547294" cluster setting kubeconfig missing "test-preload-547294" context setting]
	I0908 17:35:44.665457   43239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/kubeconfig: {Name:mkb59774845ad4e65ea2ac11e21880c504ffe601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 17:35:44.665920   43239 kapi.go:59] client config for test-preload-547294: &rest.Config{Host:"https://192.168.39.30:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/client.crt", KeyFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/client.key", CAFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 17:35:44.666264   43239 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0908 17:35:44.666277   43239 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0908 17:35:44.666281   43239 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0908 17:35:44.666285   43239 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0908 17:35:44.666289   43239 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0908 17:35:44.666548   43239 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 17:35:44.678390   43239 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.30
	I0908 17:35:44.678422   43239 kubeadm.go:1152] stopping kube-system containers ...
	I0908 17:35:44.678432   43239 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0908 17:35:44.678489   43239 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 17:35:44.721885   43239 cri.go:89] found id: ""
	I0908 17:35:44.721963   43239 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0908 17:35:44.745150   43239 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 17:35:44.757308   43239 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 17:35:44.757329   43239 kubeadm.go:157] found existing configuration files:
	
	I0908 17:35:44.757382   43239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 17:35:44.768814   43239 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 17:35:44.768889   43239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 17:35:44.780707   43239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 17:35:44.791781   43239 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 17:35:44.791852   43239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 17:35:44.804085   43239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 17:35:44.815195   43239 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 17:35:44.815274   43239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 17:35:44.827231   43239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 17:35:44.838032   43239 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 17:35:44.838097   43239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
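The grep/rm sequence above applies one rule to each kubeadm-generated kubeconfig under /etc/kubernetes: if the file does not reference the expected control-plane endpoint (or does not exist, as in this run), delete it so the later "kubeadm init phase kubeconfig all" regenerates it. A sketch of that rule using plain string matching, mirroring the grep in the log:

    // stale_kubeconfig.go: sketch of the grep-then-rm pattern above; any
    // kubeconfig not mentioning the expected control-plane endpoint is removed
    // so kubeadm can regenerate it.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or wrong endpoint: remove, ignoring errors like rm -f.
    			_ = os.Remove(f)
    			fmt.Printf("removed stale %s\n", f)
    		}
    	}
    }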
	I0908 17:35:44.849836   43239 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 17:35:44.861767   43239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:35:44.920308   43239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:35:46.005344   43239 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.08499176s)
	I0908 17:35:46.005386   43239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:35:46.243361   43239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:35:46.308109   43239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:35:46.386387   43239 api_server.go:52] waiting for apiserver process to appear ...
	I0908 17:35:46.386480   43239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:35:46.886790   43239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:35:47.386531   43239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:35:47.887543   43239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:35:48.387059   43239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:35:48.886797   43239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:35:48.924332   43239 api_server.go:72] duration metric: took 2.537942891s to wait for apiserver process to appear ...
	I0908 17:35:48.924368   43239 api_server.go:88] waiting for apiserver healthz status ...
	I0908 17:35:48.924392   43239 api_server.go:253] Checking apiserver healthz at https://192.168.39.30:8443/healthz ...
	I0908 17:35:48.924892   43239 api_server.go:269] stopped: https://192.168.39.30:8443/healthz: Get "https://192.168.39.30:8443/healthz": dial tcp 192.168.39.30:8443: connect: connection refused
	I0908 17:35:49.425085   43239 api_server.go:253] Checking apiserver healthz at https://192.168.39.30:8443/healthz ...
	I0908 17:35:51.733201   43239 api_server.go:279] https://192.168.39.30:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 17:35:51.733236   43239 api_server.go:103] status: https://192.168.39.30:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 17:35:51.733261   43239 api_server.go:253] Checking apiserver healthz at https://192.168.39.30:8443/healthz ...
	I0908 17:35:51.780016   43239 api_server.go:279] https://192.168.39.30:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 17:35:51.780046   43239 api_server.go:103] status: https://192.168.39.30:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 17:35:51.925516   43239 api_server.go:253] Checking apiserver healthz at https://192.168.39.30:8443/healthz ...
	I0908 17:35:51.932150   43239 api_server.go:279] https://192.168.39.30:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 17:35:51.932173   43239 api_server.go:103] status: https://192.168.39.30:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 17:35:52.424455   43239 api_server.go:253] Checking apiserver healthz at https://192.168.39.30:8443/healthz ...
	I0908 17:35:52.431728   43239 api_server.go:279] https://192.168.39.30:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 17:35:52.431757   43239 api_server.go:103] status: https://192.168.39.30:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 17:35:52.925153   43239 api_server.go:253] Checking apiserver healthz at https://192.168.39.30:8443/healthz ...
	I0908 17:35:52.932380   43239 api_server.go:279] https://192.168.39.30:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 17:35:52.932420   43239 api_server.go:103] status: https://192.168.39.30:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 17:35:53.425414   43239 api_server.go:253] Checking apiserver healthz at https://192.168.39.30:8443/healthz ...
	I0908 17:35:53.430142   43239 api_server.go:279] https://192.168.39.30:8443/healthz returned 200:
	ok
	I0908 17:35:53.436983   43239 api_server.go:141] control plane version: v1.32.0
	I0908 17:35:53.437011   43239 api_server.go:131] duration metric: took 4.512635992s to wait for apiserver health ...
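The wait above polls https://192.168.39.30:8443/healthz roughly every 500ms, tolerating connection-refused (apiserver not yet listening), 403 (anonymous user before RBAC bootstrap completes), and 500 (post-start hooks such as rbac/bootstrap-roles still pending) until a plain 200 "ok" is returned. A minimal sketch of that loop; the real client trusts the cluster CA, and InsecureSkipVerify below is only to keep the example short.

    // healthz_wait.go: minimal sketch of the /healthz polling loop shown above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			// 403 and 500 are expected while RBAC bootstrap and post-start
    			// hooks are still running; keep polling until a 200 "ok".
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.30:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }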
	I0908 17:35:53.437019   43239 cni.go:84] Creating CNI manager for ""
	I0908 17:35:53.437025   43239 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 17:35:53.438699   43239 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 17:35:53.439878   43239 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 17:35:53.464078   43239 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0908 17:35:53.520167   43239 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 17:35:53.528450   43239 system_pods.go:59] 7 kube-system pods found
	I0908 17:35:53.528505   43239 system_pods.go:61] "coredns-668d6bf9bc-lfvrk" [e194988c-f858-49e2-80a3-6dc7a273e8c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 17:35:53.528518   43239 system_pods.go:61] "etcd-test-preload-547294" [9ec20b0c-2972-42d0-bd08-8411751180ff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 17:35:53.528528   43239 system_pods.go:61] "kube-apiserver-test-preload-547294" [8c5ee697-32af-4fba-af86-445f50d99c85] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 17:35:53.528533   43239 system_pods.go:61] "kube-controller-manager-test-preload-547294" [ebbe7470-28c1-41b6-a9b1-b0378bdf2705] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 17:35:53.528539   43239 system_pods.go:61] "kube-proxy-vfq4j" [f8306525-7f0f-44ac-8272-389021c6b50f] Running
	I0908 17:35:53.528548   43239 system_pods.go:61] "kube-scheduler-test-preload-547294" [6b2f7558-5b7b-4d0d-9dd3-f835daf08765] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 17:35:53.528560   43239 system_pods.go:61] "storage-provisioner" [9d71542a-00ba-4448-a3af-3f4522d17b77] Running
	I0908 17:35:53.528568   43239 system_pods.go:74] duration metric: took 8.376498ms to wait for pod list to return data ...
	I0908 17:35:53.528580   43239 node_conditions.go:102] verifying NodePressure condition ...
	I0908 17:35:53.534057   43239 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 17:35:53.534084   43239 node_conditions.go:123] node cpu capacity is 2
	I0908 17:35:53.534094   43239 node_conditions.go:105] duration metric: took 5.506743ms to run NodePressure ...
	I0908 17:35:53.534117   43239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:35:53.797712   43239 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0908 17:35:53.801893   43239 kubeadm.go:735] kubelet initialised
	I0908 17:35:53.801919   43239 kubeadm.go:736] duration metric: took 4.175931ms waiting for restarted kubelet to initialise ...
	I0908 17:35:53.801934   43239 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 17:35:53.822977   43239 ops.go:34] apiserver oom_adj: -16
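The oom_adj value of -16 above, read via "cat /proc/$(pgrep kube-apiserver)/oom_adj", confirms the restarted apiserver is biased away from the OOM killer. A small sketch of the same read, assuming a single kube-apiserver process and pgrep on PATH:

    // oom_adj.go: sketch of the "cat /proc/$(pgrep kube-apiserver)/oom_adj"
    // check above. Illustrative only.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
    		os.Exit(1)
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj) // e.g. -16: OOM killer avoids it
    }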
	I0908 17:35:53.822998   43239 kubeadm.go:593] duration metric: took 9.170699622s to restartPrimaryControlPlane
	I0908 17:35:53.823006   43239 kubeadm.go:394] duration metric: took 9.223197527s to StartCluster
	I0908 17:35:53.823038   43239 settings.go:142] acquiring lock: {Name:mk1c22e0fe8486f74cbd8991c9b3bb6f4c36c978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 17:35:53.823105   43239 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 17:35:53.823625   43239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/kubeconfig: {Name:mkb59774845ad4e65ea2ac11e21880c504ffe601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 17:35:53.823834   43239 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 17:35:53.823964   43239 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 17:35:53.824019   43239 config.go:182] Loaded profile config "test-preload-547294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0908 17:35:53.824057   43239 addons.go:69] Setting storage-provisioner=true in profile "test-preload-547294"
	I0908 17:35:53.824078   43239 addons.go:238] Setting addon storage-provisioner=true in "test-preload-547294"
	W0908 17:35:53.824090   43239 addons.go:247] addon storage-provisioner should already be in state true
	I0908 17:35:53.824091   43239 addons.go:69] Setting default-storageclass=true in profile "test-preload-547294"
	I0908 17:35:53.824112   43239 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-547294"
	I0908 17:35:53.824121   43239 host.go:66] Checking if "test-preload-547294" exists ...
	I0908 17:35:53.824440   43239 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:35:53.824482   43239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:35:53.824549   43239 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:35:53.824593   43239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:35:53.825491   43239 out.go:179] * Verifying Kubernetes components...
	I0908 17:35:53.827022   43239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 17:35:53.840185   43239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36755
	I0908 17:35:53.840200   43239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41925
	I0908 17:35:53.840656   43239 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:35:53.840758   43239 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:35:53.841098   43239 main.go:141] libmachine: Using API Version  1
	I0908 17:35:53.841125   43239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:35:53.841391   43239 main.go:141] libmachine: Using API Version  1
	I0908 17:35:53.841422   43239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:35:53.841441   43239 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:35:53.841771   43239 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:35:53.841972   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetState
	I0908 17:35:53.842047   43239 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:35:53.842094   43239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:35:53.844416   43239 kapi.go:59] client config for test-preload-547294: &rest.Config{Host:"https://192.168.39.30:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/client.crt", KeyFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/client.key", CAFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 17:35:53.844762   43239 addons.go:238] Setting addon default-storageclass=true in "test-preload-547294"
	W0908 17:35:53.844783   43239 addons.go:247] addon default-storageclass should already be in state true
	I0908 17:35:53.844812   43239 host.go:66] Checking if "test-preload-547294" exists ...
	I0908 17:35:53.845178   43239 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:35:53.845226   43239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:35:53.858606   43239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38963
	I0908 17:35:53.859193   43239 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:35:53.859706   43239 main.go:141] libmachine: Using API Version  1
	I0908 17:35:53.859731   43239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:35:53.859845   43239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33035
	I0908 17:35:53.860145   43239 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:35:53.860294   43239 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:35:53.860356   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetState
	I0908 17:35:53.860739   43239 main.go:141] libmachine: Using API Version  1
	I0908 17:35:53.860760   43239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:35:53.861071   43239 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:35:53.861732   43239 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:35:53.861780   43239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:35:53.862286   43239 main.go:141] libmachine: (test-preload-547294) Calling .DriverName
	I0908 17:35:53.864202   43239 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 17:35:53.865606   43239 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 17:35:53.865628   43239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 17:35:53.865650   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHHostname
	I0908 17:35:53.869066   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:53.869569   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:53.869601   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:53.869782   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHPort
	I0908 17:35:53.869930   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:53.870095   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHUsername
	I0908 17:35:53.870261   43239 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/test-preload-547294/id_rsa Username:docker}
	I0908 17:35:53.877596   43239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36393
	I0908 17:35:53.877984   43239 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:35:53.878382   43239 main.go:141] libmachine: Using API Version  1
	I0908 17:35:53.878401   43239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:35:53.878720   43239 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:35:53.878919   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetState
	I0908 17:35:53.880432   43239 main.go:141] libmachine: (test-preload-547294) Calling .DriverName
	I0908 17:35:53.880600   43239 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 17:35:53.880615   43239 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 17:35:53.880634   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHHostname
	I0908 17:35:53.883816   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:53.884341   43239 main.go:141] libmachine: (test-preload-547294) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:b9:d2", ip: ""} in network mk-test-preload-547294: {Iface:virbr1 ExpiryTime:2025-09-08 18:35:31 +0000 UTC Type:0 Mac:52:54:00:05:b9:d2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:test-preload-547294 Clientid:01:52:54:00:05:b9:d2}
	I0908 17:35:53.884361   43239 main.go:141] libmachine: (test-preload-547294) DBG | domain test-preload-547294 has defined IP address 192.168.39.30 and MAC address 52:54:00:05:b9:d2 in network mk-test-preload-547294
	I0908 17:35:53.884549   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHPort
	I0908 17:35:53.884713   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHKeyPath
	I0908 17:35:53.884867   43239 main.go:141] libmachine: (test-preload-547294) Calling .GetSSHUsername
	I0908 17:35:53.884976   43239 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/test-preload-547294/id_rsa Username:docker}
	I0908 17:35:54.074035   43239 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 17:35:54.093482   43239 node_ready.go:35] waiting up to 6m0s for node "test-preload-547294" to be "Ready" ...
	I0908 17:35:54.147217   43239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 17:35:54.251151   43239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 17:35:54.311473   43239 main.go:141] libmachine: Making call to close driver server
	I0908 17:35:54.311499   43239 main.go:141] libmachine: (test-preload-547294) Calling .Close
	I0908 17:35:54.311781   43239 main.go:141] libmachine: (test-preload-547294) DBG | Closing plugin on server side
	I0908 17:35:54.311798   43239 main.go:141] libmachine: Successfully made call to close driver server
	I0908 17:35:54.311815   43239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 17:35:54.311834   43239 main.go:141] libmachine: Making call to close driver server
	I0908 17:35:54.311848   43239 main.go:141] libmachine: (test-preload-547294) Calling .Close
	I0908 17:35:54.312106   43239 main.go:141] libmachine: Successfully made call to close driver server
	I0908 17:35:54.312125   43239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 17:35:54.312136   43239 main.go:141] libmachine: (test-preload-547294) DBG | Closing plugin on server side
	I0908 17:35:54.322102   43239 main.go:141] libmachine: Making call to close driver server
	I0908 17:35:54.322119   43239 main.go:141] libmachine: (test-preload-547294) Calling .Close
	I0908 17:35:54.322400   43239 main.go:141] libmachine: Successfully made call to close driver server
	I0908 17:35:54.322421   43239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 17:35:54.322445   43239 main.go:141] libmachine: (test-preload-547294) DBG | Closing plugin on server side
	I0908 17:35:54.882841   43239 main.go:141] libmachine: Making call to close driver server
	I0908 17:35:54.882876   43239 main.go:141] libmachine: (test-preload-547294) Calling .Close
	I0908 17:35:54.883163   43239 main.go:141] libmachine: Successfully made call to close driver server
	I0908 17:35:54.883181   43239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 17:35:54.883195   43239 main.go:141] libmachine: (test-preload-547294) DBG | Closing plugin on server side
	I0908 17:35:54.883283   43239 main.go:141] libmachine: Making call to close driver server
	I0908 17:35:54.883306   43239 main.go:141] libmachine: (test-preload-547294) Calling .Close
	I0908 17:35:54.883655   43239 main.go:141] libmachine: Successfully made call to close driver server
	I0908 17:35:54.883674   43239 main.go:141] libmachine: Making call to close connection to plugin binary
	I0908 17:35:54.883681   43239 main.go:141] libmachine: (test-preload-547294) DBG | Closing plugin on server side
	I0908 17:35:54.885460   43239 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0908 17:35:54.886527   43239 addons.go:514] duration metric: took 1.062573227s for enable addons: enabled=[default-storageclass storage-provisioner]
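The addon phase above boils down to two kubectl applies run on the node with a scoped kubeconfig ("sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f ..."). In the real flow those commands go over SSH; the sketch below invokes them locally for illustration, with paths taken from the log and the helper name applyManifest being illustrative.

    // addon_apply.go: sketch of the two addon applies shown above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func applyManifest(manifest string) error {
    	// sudo accepts VAR=value assignments before the command, matching the log line.
    	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.32.0/kubectl", "apply", "-f", manifest)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	for _, m := range []string{
    		"/etc/kubernetes/addons/storageclass.yaml",
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    	} {
    		if err := applyManifest(m); err != nil {
    			fmt.Fprintln(os.Stderr, m, err)
    		}
    	}
    }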
	W0908 17:35:56.097449   43239 node_ready.go:57] node "test-preload-547294" has "Ready":"False" status (will retry)
	W0908 17:35:58.596413   43239 node_ready.go:57] node "test-preload-547294" has "Ready":"False" status (will retry)
	W0908 17:36:00.600118   43239 node_ready.go:57] node "test-preload-547294" has "Ready":"False" status (will retry)
	I0908 17:36:02.597953   43239 node_ready.go:49] node "test-preload-547294" is "Ready"
	I0908 17:36:02.597991   43239 node_ready.go:38] duration metric: took 8.504450837s for node "test-preload-547294" to be "Ready" ...
	I0908 17:36:02.598010   43239 api_server.go:52] waiting for apiserver process to appear ...
	I0908 17:36:02.598072   43239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:36:02.618915   43239 api_server.go:72] duration metric: took 8.795038965s to wait for apiserver process to appear ...
	I0908 17:36:02.618947   43239 api_server.go:88] waiting for apiserver healthz status ...
	I0908 17:36:02.618962   43239 api_server.go:253] Checking apiserver healthz at https://192.168.39.30:8443/healthz ...
	I0908 17:36:02.625555   43239 api_server.go:279] https://192.168.39.30:8443/healthz returned 200:
	ok
	I0908 17:36:02.626394   43239 api_server.go:141] control plane version: v1.32.0
	I0908 17:36:02.626417   43239 api_server.go:131] duration metric: took 7.464152ms to wait for apiserver health ...
	I0908 17:36:02.626424   43239 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 17:36:02.631822   43239 system_pods.go:59] 7 kube-system pods found
	I0908 17:36:02.631848   43239 system_pods.go:61] "coredns-668d6bf9bc-lfvrk" [e194988c-f858-49e2-80a3-6dc7a273e8c7] Running
	I0908 17:36:02.631859   43239 system_pods.go:61] "etcd-test-preload-547294" [9ec20b0c-2972-42d0-bd08-8411751180ff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 17:36:02.631867   43239 system_pods.go:61] "kube-apiserver-test-preload-547294" [8c5ee697-32af-4fba-af86-445f50d99c85] Running
	I0908 17:36:02.631875   43239 system_pods.go:61] "kube-controller-manager-test-preload-547294" [ebbe7470-28c1-41b6-a9b1-b0378bdf2705] Running
	I0908 17:36:02.631880   43239 system_pods.go:61] "kube-proxy-vfq4j" [f8306525-7f0f-44ac-8272-389021c6b50f] Running
	I0908 17:36:02.631885   43239 system_pods.go:61] "kube-scheduler-test-preload-547294" [6b2f7558-5b7b-4d0d-9dd3-f835daf08765] Running
	I0908 17:36:02.631889   43239 system_pods.go:61] "storage-provisioner" [9d71542a-00ba-4448-a3af-3f4522d17b77] Running
	I0908 17:36:02.631896   43239 system_pods.go:74] duration metric: took 5.466487ms to wait for pod list to return data ...
	I0908 17:36:02.631905   43239 default_sa.go:34] waiting for default service account to be created ...
	I0908 17:36:02.635477   43239 default_sa.go:45] found service account: "default"
	I0908 17:36:02.635497   43239 default_sa.go:55] duration metric: took 3.5863ms for default service account to be created ...
	I0908 17:36:02.635506   43239 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 17:36:02.638387   43239 system_pods.go:86] 7 kube-system pods found
	I0908 17:36:02.638412   43239 system_pods.go:89] "coredns-668d6bf9bc-lfvrk" [e194988c-f858-49e2-80a3-6dc7a273e8c7] Running
	I0908 17:36:02.638423   43239 system_pods.go:89] "etcd-test-preload-547294" [9ec20b0c-2972-42d0-bd08-8411751180ff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 17:36:02.638431   43239 system_pods.go:89] "kube-apiserver-test-preload-547294" [8c5ee697-32af-4fba-af86-445f50d99c85] Running
	I0908 17:36:02.638437   43239 system_pods.go:89] "kube-controller-manager-test-preload-547294" [ebbe7470-28c1-41b6-a9b1-b0378bdf2705] Running
	I0908 17:36:02.638442   43239 system_pods.go:89] "kube-proxy-vfq4j" [f8306525-7f0f-44ac-8272-389021c6b50f] Running
	I0908 17:36:02.638446   43239 system_pods.go:89] "kube-scheduler-test-preload-547294" [6b2f7558-5b7b-4d0d-9dd3-f835daf08765] Running
	I0908 17:36:02.638451   43239 system_pods.go:89] "storage-provisioner" [9d71542a-00ba-4448-a3af-3f4522d17b77] Running
	I0908 17:36:02.638459   43239 system_pods.go:126] duration metric: took 2.945705ms to wait for k8s-apps to be running ...
	I0908 17:36:02.638468   43239 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 17:36:02.638521   43239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 17:36:02.655730   43239 system_svc.go:56] duration metric: took 17.254589ms WaitForService to wait for kubelet
	I0908 17:36:02.655757   43239 kubeadm.go:578] duration metric: took 8.831894279s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 17:36:02.655772   43239 node_conditions.go:102] verifying NodePressure condition ...
	I0908 17:36:02.658448   43239 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 17:36:02.658468   43239 node_conditions.go:123] node cpu capacity is 2
	I0908 17:36:02.658485   43239 node_conditions.go:105] duration metric: took 2.703676ms to run NodePressure ...
	I0908 17:36:02.658497   43239 start.go:241] waiting for startup goroutines ...
	I0908 17:36:02.658507   43239 start.go:246] waiting for cluster config update ...
	I0908 17:36:02.658519   43239 start.go:255] writing updated cluster config ...
	I0908 17:36:02.658808   43239 ssh_runner.go:195] Run: rm -f paused
	I0908 17:36:02.664089   43239 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 17:36:02.664554   43239 kapi.go:59] client config for test-preload-547294: &rest.Config{Host:"https://192.168.39.30:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/client.crt", KeyFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/profiles/test-preload-547294/client.key", CAFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 17:36:02.667974   43239 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-lfvrk" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:36:02.673047   43239 pod_ready.go:94] pod "coredns-668d6bf9bc-lfvrk" is "Ready"
	I0908 17:36:02.673070   43239 pod_ready.go:86] duration metric: took 5.076285ms for pod "coredns-668d6bf9bc-lfvrk" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:36:02.676318   43239 pod_ready.go:83] waiting for pod "etcd-test-preload-547294" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:36:03.682469   43239 pod_ready.go:94] pod "etcd-test-preload-547294" is "Ready"
	I0908 17:36:03.682495   43239 pod_ready.go:86] duration metric: took 1.006158923s for pod "etcd-test-preload-547294" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:36:03.685233   43239 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-547294" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:36:03.690411   43239 pod_ready.go:94] pod "kube-apiserver-test-preload-547294" is "Ready"
	I0908 17:36:03.690432   43239 pod_ready.go:86] duration metric: took 5.179533ms for pod "kube-apiserver-test-preload-547294" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:36:03.692730   43239 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-547294" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:36:03.869454   43239 pod_ready.go:94] pod "kube-controller-manager-test-preload-547294" is "Ready"
	I0908 17:36:03.869482   43239 pod_ready.go:86] duration metric: took 176.730778ms for pod "kube-controller-manager-test-preload-547294" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:36:04.069021   43239 pod_ready.go:83] waiting for pod "kube-proxy-vfq4j" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:36:04.469036   43239 pod_ready.go:94] pod "kube-proxy-vfq4j" is "Ready"
	I0908 17:36:04.469061   43239 pod_ready.go:86] duration metric: took 400.010012ms for pod "kube-proxy-vfq4j" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:36:04.668174   43239 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-547294" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:36:05.068096   43239 pod_ready.go:94] pod "kube-scheduler-test-preload-547294" is "Ready"
	I0908 17:36:05.068120   43239 pod_ready.go:86] duration metric: took 399.920705ms for pod "kube-scheduler-test-preload-547294" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:36:05.068130   43239 pod_ready.go:40] duration metric: took 2.40401755s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 17:36:05.109093   43239 start.go:617] kubectl: 1.33.2, cluster: 1.32.0 (minor skew: 1)
	I0908 17:36:05.110986   43239 out.go:179] * Done! kubectl is now configured to use "test-preload-547294" cluster and "default" namespace by default
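
Editor's note: the pod_ready lines above are minikube's own per-pod wait loop. For readers who want to reproduce an equivalent check against the restarted cluster, the following is a minimal client-go sketch, not minikube's implementation; the kubeconfig path and the k8s-app=kube-dns selector are assumptions for illustration (the selector is one of the labels listed in the log line above).

// readinesscheck.go - minimal sketch of a "wait until Ready" poll similar to
// the pod_ready waits logged above. Assumes a kubeconfig at the default
// location; illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", homedir.HomeDir()+"/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s for up to 2 minutes until every pod matching the label
	// selector reports the Ready condition, mirroring the per-pod waits above.
	selector := "k8s-app=kube-dns" // assumption: one of the labels named in the log
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			if len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				if !podReady(&p) {
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("all matching pods are Ready")
}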
	
	
	==> CRI-O <==
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.004400392Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757352966004381918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1d70352-9083-4273-8b82-cc50ba3658e4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.005515815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74c50da3-78e1-4538-b999-ed326367ed40 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.005717781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74c50da3-78e1-4538-b999-ed326367ed40 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.006083021Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f1bf4d7f6a25eb31ec214a738ad766febd4f814d773bced79f8aab8105dadbe,PodSandboxId:cefb84104b6efc664d7a5436a66e66f3b0e557c67239dbb3e18df1c169b9ed63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757352960381871122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lfvrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e194988c-f858-49e2-80a3-6dc7a273e8c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b3ca45d78ab390bbcd5e70707dfa40f1ecb338723b1c20781b597d3e285b36f,PodSandboxId:0ea4f9db6a22c185f220c21720d3831c5929e6223f9bc0fe2916671ca8fb92e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757352952826500344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfq4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f8306525-7f0f-44ac-8272-389021c6b50f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df21e6cada209ca750916b0f82ddc34fbfb7720c45077524a917fb6862a72d4,PodSandboxId:7ae56027c8d63d103925ecd716bd3482bb1ea4a7d1afe471dd292f3fba876995,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757352952820160117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
71542a-00ba-4448-a3af-3f4522d17b77,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be286e4118e11407ae184c45509c76dbca659ef9bdd08d280a7b1b2aa7b5e787,PodSandboxId:8f51435ad3b609fec0fecc2a6789294e93995c154313e92e5ff8de0623a1f9f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757352948535039007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28503e07e
94d99b024a3fcde1e6e0260,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c287fb0e60aff8a1605fa42e368a346f2434c55040cd3bcdde4e991f5a516d4,PodSandboxId:2c20afe9b2d348fd77176871b4e57f0c95fe8e2f7b3a066fb7d621d69cdde692,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757352948552136193,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515ae0016b17cbf50cf42307c4f9227d,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d6b543ed056ed3bd9a3f0b1a05e01bd3b59d8611523545791680656a01960b,PodSandboxId:8c7d9d36cf9a1d7c6edd55ec731e7b34397040ce344e7ff1b69afc9a6660c1ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757352948506336466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ae00ea93b6e50c439ca156514ed891,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ea6b10352777e0b9a2ab951e9f2f37ea2f2d9e12d2e4a5d5356a48337b602f,PodSandboxId:d345ee5c8ddb954501e735ad21c26ee039de41b7fc805430124e1f1fa8d27eaf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757352948497141487,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec30dcee15b349c59a8f608413b3d88,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=74c50da3-78e1-4538-b999-ed326367ed40 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.047472970Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81f5a723-0da4-4dfb-ad81-16845551d5f3 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.047566974Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81f5a723-0da4-4dfb-ad81-16845551d5f3 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.048742779Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a96c6bb-5bc2-4d35-b3ac-37c24e30dd93 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.049400950Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757352966049379895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a96c6bb-5bc2-4d35-b3ac-37c24e30dd93 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.050048190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e90d173-9ed5-4bed-8922-f109b7a5074c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.050202904Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e90d173-9ed5-4bed-8922-f109b7a5074c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.050402047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f1bf4d7f6a25eb31ec214a738ad766febd4f814d773bced79f8aab8105dadbe,PodSandboxId:cefb84104b6efc664d7a5436a66e66f3b0e557c67239dbb3e18df1c169b9ed63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757352960381871122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lfvrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e194988c-f858-49e2-80a3-6dc7a273e8c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b3ca45d78ab390bbcd5e70707dfa40f1ecb338723b1c20781b597d3e285b36f,PodSandboxId:0ea4f9db6a22c185f220c21720d3831c5929e6223f9bc0fe2916671ca8fb92e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757352952826500344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfq4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f8306525-7f0f-44ac-8272-389021c6b50f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df21e6cada209ca750916b0f82ddc34fbfb7720c45077524a917fb6862a72d4,PodSandboxId:7ae56027c8d63d103925ecd716bd3482bb1ea4a7d1afe471dd292f3fba876995,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757352952820160117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
71542a-00ba-4448-a3af-3f4522d17b77,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be286e4118e11407ae184c45509c76dbca659ef9bdd08d280a7b1b2aa7b5e787,PodSandboxId:8f51435ad3b609fec0fecc2a6789294e93995c154313e92e5ff8de0623a1f9f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757352948535039007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28503e07e
94d99b024a3fcde1e6e0260,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c287fb0e60aff8a1605fa42e368a346f2434c55040cd3bcdde4e991f5a516d4,PodSandboxId:2c20afe9b2d348fd77176871b4e57f0c95fe8e2f7b3a066fb7d621d69cdde692,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757352948552136193,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515ae0016b17cbf50cf42307c4f9227d,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d6b543ed056ed3bd9a3f0b1a05e01bd3b59d8611523545791680656a01960b,PodSandboxId:8c7d9d36cf9a1d7c6edd55ec731e7b34397040ce344e7ff1b69afc9a6660c1ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757352948506336466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ae00ea93b6e50c439ca156514ed891,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ea6b10352777e0b9a2ab951e9f2f37ea2f2d9e12d2e4a5d5356a48337b602f,PodSandboxId:d345ee5c8ddb954501e735ad21c26ee039de41b7fc805430124e1f1fa8d27eaf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757352948497141487,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec30dcee15b349c59a8f608413b3d88,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e90d173-9ed5-4bed-8922-f109b7a5074c name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.091854946Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9166cba2-9da1-46ad-94d9-cefe16d2e765 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.091946868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9166cba2-9da1-46ad-94d9-cefe16d2e765 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.093555999Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47377411-e10f-4b71-9a3e-11566328e375 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.094195913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757352966094169937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47377411-e10f-4b71-9a3e-11566328e375 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.095173936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b71e582b-4025-4301-9c95-1caf5631aa71 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.095286953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b71e582b-4025-4301-9c95-1caf5631aa71 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.095684725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f1bf4d7f6a25eb31ec214a738ad766febd4f814d773bced79f8aab8105dadbe,PodSandboxId:cefb84104b6efc664d7a5436a66e66f3b0e557c67239dbb3e18df1c169b9ed63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757352960381871122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lfvrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e194988c-f858-49e2-80a3-6dc7a273e8c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b3ca45d78ab390bbcd5e70707dfa40f1ecb338723b1c20781b597d3e285b36f,PodSandboxId:0ea4f9db6a22c185f220c21720d3831c5929e6223f9bc0fe2916671ca8fb92e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757352952826500344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfq4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f8306525-7f0f-44ac-8272-389021c6b50f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df21e6cada209ca750916b0f82ddc34fbfb7720c45077524a917fb6862a72d4,PodSandboxId:7ae56027c8d63d103925ecd716bd3482bb1ea4a7d1afe471dd292f3fba876995,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757352952820160117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
71542a-00ba-4448-a3af-3f4522d17b77,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be286e4118e11407ae184c45509c76dbca659ef9bdd08d280a7b1b2aa7b5e787,PodSandboxId:8f51435ad3b609fec0fecc2a6789294e93995c154313e92e5ff8de0623a1f9f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757352948535039007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28503e07e
94d99b024a3fcde1e6e0260,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c287fb0e60aff8a1605fa42e368a346f2434c55040cd3bcdde4e991f5a516d4,PodSandboxId:2c20afe9b2d348fd77176871b4e57f0c95fe8e2f7b3a066fb7d621d69cdde692,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757352948552136193,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515ae0016b17cbf50cf42307c4f9227d,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d6b543ed056ed3bd9a3f0b1a05e01bd3b59d8611523545791680656a01960b,PodSandboxId:8c7d9d36cf9a1d7c6edd55ec731e7b34397040ce344e7ff1b69afc9a6660c1ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757352948506336466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ae00ea93b6e50c439ca156514ed891,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ea6b10352777e0b9a2ab951e9f2f37ea2f2d9e12d2e4a5d5356a48337b602f,PodSandboxId:d345ee5c8ddb954501e735ad21c26ee039de41b7fc805430124e1f1fa8d27eaf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757352948497141487,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec30dcee15b349c59a8f608413b3d88,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b71e582b-4025-4301-9c95-1caf5631aa71 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.132510524Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea1fdee9-5917-41ac-8b86-9143ec165482 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.132949474Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea1fdee9-5917-41ac-8b86-9143ec165482 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.134265238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a69cad8-f65e-4e8b-ae99-39451ddb02e4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.134673638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757352966134651464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a69cad8-f65e-4e8b-ae99-39451ddb02e4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.135353490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c692a1ef-f53d-4823-828f-0241d27d7670 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.135451198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c692a1ef-f53d-4823-828f-0241d27d7670 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:36:06 test-preload-547294 crio[836]: time="2025-09-08 17:36:06.135630606Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f1bf4d7f6a25eb31ec214a738ad766febd4f814d773bced79f8aab8105dadbe,PodSandboxId:cefb84104b6efc664d7a5436a66e66f3b0e557c67239dbb3e18df1c169b9ed63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757352960381871122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lfvrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e194988c-f858-49e2-80a3-6dc7a273e8c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b3ca45d78ab390bbcd5e70707dfa40f1ecb338723b1c20781b597d3e285b36f,PodSandboxId:0ea4f9db6a22c185f220c21720d3831c5929e6223f9bc0fe2916671ca8fb92e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757352952826500344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfq4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f8306525-7f0f-44ac-8272-389021c6b50f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6df21e6cada209ca750916b0f82ddc34fbfb7720c45077524a917fb6862a72d4,PodSandboxId:7ae56027c8d63d103925ecd716bd3482bb1ea4a7d1afe471dd292f3fba876995,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757352952820160117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
71542a-00ba-4448-a3af-3f4522d17b77,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be286e4118e11407ae184c45509c76dbca659ef9bdd08d280a7b1b2aa7b5e787,PodSandboxId:8f51435ad3b609fec0fecc2a6789294e93995c154313e92e5ff8de0623a1f9f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757352948535039007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28503e07e
94d99b024a3fcde1e6e0260,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c287fb0e60aff8a1605fa42e368a346f2434c55040cd3bcdde4e991f5a516d4,PodSandboxId:2c20afe9b2d348fd77176871b4e57f0c95fe8e2f7b3a066fb7d621d69cdde692,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757352948552136193,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515ae0016b17cbf50cf42307c4f9227d,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6d6b543ed056ed3bd9a3f0b1a05e01bd3b59d8611523545791680656a01960b,PodSandboxId:8c7d9d36cf9a1d7c6edd55ec731e7b34397040ce344e7ff1b69afc9a6660c1ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757352948506336466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ae00ea93b6e50c439ca156514ed891,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ea6b10352777e0b9a2ab951e9f2f37ea2f2d9e12d2e4a5d5356a48337b602f,PodSandboxId:d345ee5c8ddb954501e735ad21c26ee039de41b7fc805430124e1f1fa8d27eaf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757352948497141487,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-547294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec30dcee15b349c59a8f608413b3d88,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c692a1ef-f53d-4823-828f-0241d27d7670 name=/runtime.v1.RuntimeService/ListContainers
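
Editor's note: the repeated ListContainers and ImageFsInfo requests in the CRI-O debug log above are ordinary CRI polling; the kubelet and crictl issue the same gRPC calls. The sketch below issues ListContainers directly against the CRI-O socket. It is illustrative only: the socket path is the usual CRI-O default (it matches the cri-socket annotation later in this report), not something derived from this run.

// listcontainers.go - minimal sketch of the ListContainers RPC seen in the
// CRI-O debug log above. Illustrative; assumes the default CRI-O socket path.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimev1.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	// Print roughly the same columns as the "container status" table below.
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.GetId(), c.GetMetadata().GetName(), c.GetState())
	}
}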
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5f1bf4d7f6a25       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   5 seconds ago       Running             coredns                   1                   cefb84104b6ef       coredns-668d6bf9bc-lfvrk
	5b3ca45d78ab3       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   13 seconds ago      Running             kube-proxy                1                   0ea4f9db6a22c       kube-proxy-vfq4j
	6df21e6cada20       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   7ae56027c8d63       storage-provisioner
	8c287fb0e60af       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   17 seconds ago      Running             etcd                      1                   2c20afe9b2d34       etcd-test-preload-547294
	be286e4118e11       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   17 seconds ago      Running             kube-scheduler            1                   8f51435ad3b60       kube-scheduler-test-preload-547294
	c6d6b543ed056       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   17 seconds ago      Running             kube-controller-manager   1                   8c7d9d36cf9a1       kube-controller-manager-test-preload-547294
	60ea6b1035277       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   17 seconds ago      Running             kube-apiserver            1                   d345ee5c8ddb9       kube-apiserver-test-preload-547294
	
	
	==> coredns [5f1bf4d7f6a25eb31ec214a738ad766febd4f814d773bced79f8aab8105dadbe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58239 - 26258 "HINFO IN 5240847419847604514.1568554522418012449. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036463184s
	
	
	==> describe nodes <==
	Name:               test-preload-547294
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-547294
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6
	                    minikube.k8s.io/name=test-preload-547294
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T17_34_37_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 17:34:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-547294
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 17:36:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 17:36:02 +0000   Mon, 08 Sep 2025 17:34:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 17:36:02 +0000   Mon, 08 Sep 2025 17:34:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 17:36:02 +0000   Mon, 08 Sep 2025 17:34:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 17:36:02 +0000   Mon, 08 Sep 2025 17:36:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.30
	  Hostname:    test-preload-547294
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 a2c3a9e289fc4604944e9e4095f4bbe0
	  System UUID:                a2c3a9e2-89fc-4604-944e-9e4095f4bbe0
	  Boot ID:                    a93abb01-d9da-4f63-a411-78422e3beeea
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-lfvrk                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     85s
	  kube-system                 etcd-test-preload-547294                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         91s
	  kube-system                 kube-apiserver-test-preload-547294             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-test-preload-547294    200m (10%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-vfq4j                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-test-preload-547294             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 84s                kube-proxy       
	  Normal   Starting                 13s                kube-proxy       
	  Normal   NodeHasSufficientMemory  90s                kubelet          Node test-preload-547294 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    90s                kubelet          Node test-preload-547294 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     90s                kubelet          Node test-preload-547294 status is now: NodeHasSufficientPID
	  Normal   Starting                 90s                kubelet          Starting kubelet.
	  Normal   NodeReady                89s                kubelet          Node test-preload-547294 status is now: NodeReady
	  Normal   RegisteredNode           86s                node-controller  Node test-preload-547294 event: Registered Node test-preload-547294 in Controller
	  Normal   Starting                 20s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node test-preload-547294 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node test-preload-547294 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node test-preload-547294 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 15s                kubelet          Node test-preload-547294 has been rebooted, boot id: a93abb01-d9da-4f63-a411-78422e3beeea
	  Normal   RegisteredNode           12s                node-controller  Node test-preload-547294 event: Registered Node test-preload-547294 in Controller
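
Editor's note: the Ready/MemoryPressure/DiskPressure rows above are read straight from the Node object's status conditions. A small client-go sketch for pulling just that condition table is below; the kubeconfig path is again an assumption, and the node name is taken from this report.

// nodeconditions.go - minimal sketch that prints the node condition table
// shown above directly from the API server. Assumes the default kubeconfig.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", homedir.HomeDir()+"/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "test-preload-547294", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		// Type, Status, Reason mirror the columns of the Conditions table above.
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}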
	
	
	==> dmesg <==
	[Sep 8 17:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000060] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.009924] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.016660] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083837] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.093044] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.485309] kauditd_printk_skb: 177 callbacks suppressed
	[Sep 8 17:36] kauditd_printk_skb: 128 callbacks suppressed
	
	
	==> etcd [8c287fb0e60aff8a1605fa42e368a346f2434c55040cd3bcdde4e991f5a516d4] <==
	{"level":"info","ts":"2025-09-08T17:35:48.971852Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-08T17:35:48.972143Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"404c942cebf80710","initial-advertise-peer-urls":["https://192.168.39.30:2380"],"listen-peer-urls":["https://192.168.39.30:2380"],"advertise-client-urls":["https://192.168.39.30:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.30:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-08T17:35:48.972187Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-08T17:35:48.956285Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-09-08T17:35:48.956399Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-08T17:35:48.972264Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-08T17:35:48.972281Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-08T17:35:48.972376Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.30:2380"}
	{"level":"info","ts":"2025-09-08T17:35:48.972382Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.30:2380"}
	{"level":"info","ts":"2025-09-08T17:35:50.510926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-08T17:35:50.511041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-08T17:35:50.511077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 received MsgPreVoteResp from 404c942cebf80710 at term 2"}
	{"level":"info","ts":"2025-09-08T17:35:50.511091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 became candidate at term 3"}
	{"level":"info","ts":"2025-09-08T17:35:50.511096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 received MsgVoteResp from 404c942cebf80710 at term 3"}
	{"level":"info","ts":"2025-09-08T17:35:50.511104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 became leader at term 3"}
	{"level":"info","ts":"2025-09-08T17:35:50.511111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 404c942cebf80710 elected leader 404c942cebf80710 at term 3"}
	{"level":"info","ts":"2025-09-08T17:35:50.514003Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T17:35:50.513922Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"404c942cebf80710","local-member-attributes":"{Name:test-preload-547294 ClientURLs:[https://192.168.39.30:2379]}","request-path":"/0/members/404c942cebf80710/attributes","cluster-id":"ae8b7a508f3fd394","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-08T17:35:50.514941Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T17:35:50.515103Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-08T17:35:50.515154Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-08T17:35:50.515738Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-08T17:35:50.517107Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-08T17:35:50.515853Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-08T17:35:50.517755Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.30:2379"}
	
	
	==> kernel <==
	 17:36:06 up 0 min,  0 users,  load average: 1.20, 0.31, 0.10
	Linux test-preload-547294 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [60ea6b10352777e0b9a2ab951e9f2f37ea2f2d9e12d2e4a5d5356a48337b602f] <==
	I0908 17:35:51.762839       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0908 17:35:51.762914       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0908 17:35:51.785620       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0908 17:35:51.786624       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0908 17:35:51.786876       1 aggregator.go:171] initial CRD sync complete...
	I0908 17:35:51.787181       1 autoregister_controller.go:144] Starting autoregister controller
	I0908 17:35:51.787213       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0908 17:35:51.787398       1 cache.go:39] Caches are synced for autoregister controller
	I0908 17:35:51.824054       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0908 17:35:51.837342       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0908 17:35:51.837428       1 policy_source.go:240] refreshing policies
	I0908 17:35:51.850290       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0908 17:35:51.852053       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0908 17:35:51.854519       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0908 17:35:51.855241       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0908 17:35:51.858260       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0908 17:35:52.347510       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0908 17:35:52.663226       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0908 17:35:53.601216       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0908 17:35:53.635935       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0908 17:35:53.674026       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0908 17:35:53.681839       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0908 17:35:54.973927       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 17:35:55.322453       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0908 17:35:55.370840       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c6d6b543ed056ed3bd9a3f0b1a05e01bd3b59d8611523545791680656a01960b] <==
	I0908 17:35:54.970275       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0908 17:35:54.972286       1 shared_informer.go:320] Caches are synced for crt configmap
	I0908 17:35:54.974146       1 shared_informer.go:320] Caches are synced for stateful set
	I0908 17:35:54.974336       1 shared_informer.go:320] Caches are synced for garbage collector
	I0908 17:35:54.974350       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 17:35:54.974358       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 17:35:54.975082       1 shared_informer.go:320] Caches are synced for resource quota
	I0908 17:35:54.976199       1 shared_informer.go:320] Caches are synced for endpoint
	I0908 17:35:54.979668       1 shared_informer.go:320] Caches are synced for node
	I0908 17:35:54.979915       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0908 17:35:54.980161       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0908 17:35:54.980917       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0908 17:35:54.980933       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0908 17:35:54.981184       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-547294"
	I0908 17:35:54.991917       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-547294"
	I0908 17:35:55.006034       1 shared_informer.go:320] Caches are synced for resource quota
	I0908 17:35:55.009511       1 shared_informer.go:320] Caches are synced for garbage collector
	I0908 17:35:55.334348       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="365.354924ms"
	I0908 17:35:55.335079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="415.29µs"
	I0908 17:36:00.506217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.177µs"
	I0908 17:36:01.516846       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.542114ms"
	I0908 17:36:01.526171       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="167.701µs"
	I0908 17:36:02.204966       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-547294"
	I0908 17:36:02.215963       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-547294"
	I0908 17:36:04.970021       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5b3ca45d78ab390bbcd5e70707dfa40f1ecb338723b1c20781b597d3e285b36f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0908 17:35:53.152015       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0908 17:35:53.161476       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.30"]
	E0908 17:35:53.161570       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 17:35:53.200706       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0908 17:35:53.200909       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 17:35:53.200960       1 server_linux.go:170] "Using iptables Proxier"
	I0908 17:35:53.203987       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 17:35:53.204320       1 server.go:497] "Version info" version="v1.32.0"
	I0908 17:35:53.204349       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 17:35:53.205926       1 config.go:329] "Starting node config controller"
	I0908 17:35:53.207314       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0908 17:35:53.207899       1 config.go:199] "Starting service config controller"
	I0908 17:35:53.207937       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0908 17:35:53.208008       1 config.go:105] "Starting endpoint slice config controller"
	I0908 17:35:53.208025       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0908 17:35:53.307505       1 shared_informer.go:320] Caches are synced for node config
	I0908 17:35:53.308692       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0908 17:35:53.308713       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [be286e4118e11407ae184c45509c76dbca659ef9bdd08d280a7b1b2aa7b5e787] <==
	I0908 17:35:50.062964       1 serving.go:386] Generated self-signed cert in-memory
	I0908 17:35:51.799236       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I0908 17:35:51.799285       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 17:35:51.804562       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 17:35:51.804638       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0908 17:35:51.804705       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 17:35:51.804717       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0908 17:35:51.804731       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 17:35:51.804736       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0908 17:35:51.805372       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0908 17:35:51.805459       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 17:35:51.905805       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0908 17:35:51.905937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0908 17:35:51.906545       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Sep 08 17:35:51 test-preload-547294 kubelet[1156]: I0908 17:35:51.907306    1156 kubelet_node_status.go:125] "Node was previously registered" node="test-preload-547294"
	Sep 08 17:35:51 test-preload-547294 kubelet[1156]: I0908 17:35:51.907511    1156 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-547294"
	Sep 08 17:35:51 test-preload-547294 kubelet[1156]: I0908 17:35:51.907557    1156 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 08 17:35:51 test-preload-547294 kubelet[1156]: I0908 17:35:51.909020    1156 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 08 17:35:51 test-preload-547294 kubelet[1156]: I0908 17:35:51.910123    1156 setters.go:602] "Node became not ready" node="test-preload-547294" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-08T17:35:51Z","lastTransitionTime":"2025-09-08T17:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Sep 08 17:35:52 test-preload-547294 kubelet[1156]: I0908 17:35:52.298431    1156 apiserver.go:52] "Watching apiserver"
	Sep 08 17:35:52 test-preload-547294 kubelet[1156]: E0908 17:35:52.304288    1156 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-lfvrk" podUID="e194988c-f858-49e2-80a3-6dc7a273e8c7"
	Sep 08 17:35:52 test-preload-547294 kubelet[1156]: I0908 17:35:52.315304    1156 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Sep 08 17:35:52 test-preload-547294 kubelet[1156]: I0908 17:35:52.335204    1156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8306525-7f0f-44ac-8272-389021c6b50f-xtables-lock\") pod \"kube-proxy-vfq4j\" (UID: \"f8306525-7f0f-44ac-8272-389021c6b50f\") " pod="kube-system/kube-proxy-vfq4j"
	Sep 08 17:35:52 test-preload-547294 kubelet[1156]: I0908 17:35:52.335279    1156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8306525-7f0f-44ac-8272-389021c6b50f-lib-modules\") pod \"kube-proxy-vfq4j\" (UID: \"f8306525-7f0f-44ac-8272-389021c6b50f\") " pod="kube-system/kube-proxy-vfq4j"
	Sep 08 17:35:52 test-preload-547294 kubelet[1156]: I0908 17:35:52.335296    1156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9d71542a-00ba-4448-a3af-3f4522d17b77-tmp\") pod \"storage-provisioner\" (UID: \"9d71542a-00ba-4448-a3af-3f4522d17b77\") " pod="kube-system/storage-provisioner"
	Sep 08 17:35:52 test-preload-547294 kubelet[1156]: E0908 17:35:52.336497    1156 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 08 17:35:52 test-preload-547294 kubelet[1156]: E0908 17:35:52.336601    1156 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e194988c-f858-49e2-80a3-6dc7a273e8c7-config-volume podName:e194988c-f858-49e2-80a3-6dc7a273e8c7 nodeName:}" failed. No retries permitted until 2025-09-08 17:35:52.836580916 +0000 UTC m=+6.644619467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e194988c-f858-49e2-80a3-6dc7a273e8c7-config-volume") pod "coredns-668d6bf9bc-lfvrk" (UID: "e194988c-f858-49e2-80a3-6dc7a273e8c7") : object "kube-system"/"coredns" not registered
	Sep 08 17:35:52 test-preload-547294 kubelet[1156]: E0908 17:35:52.838964    1156 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 08 17:35:52 test-preload-547294 kubelet[1156]: E0908 17:35:52.839050    1156 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e194988c-f858-49e2-80a3-6dc7a273e8c7-config-volume podName:e194988c-f858-49e2-80a3-6dc7a273e8c7 nodeName:}" failed. No retries permitted until 2025-09-08 17:35:53.839034485 +0000 UTC m=+7.647073048 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e194988c-f858-49e2-80a3-6dc7a273e8c7-config-volume") pod "coredns-668d6bf9bc-lfvrk" (UID: "e194988c-f858-49e2-80a3-6dc7a273e8c7") : object "kube-system"/"coredns" not registered
	Sep 08 17:35:53 test-preload-547294 kubelet[1156]: E0908 17:35:53.848180    1156 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 08 17:35:53 test-preload-547294 kubelet[1156]: E0908 17:35:53.848259    1156 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e194988c-f858-49e2-80a3-6dc7a273e8c7-config-volume podName:e194988c-f858-49e2-80a3-6dc7a273e8c7 nodeName:}" failed. No retries permitted until 2025-09-08 17:35:55.848244753 +0000 UTC m=+9.656283306 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e194988c-f858-49e2-80a3-6dc7a273e8c7-config-volume") pod "coredns-668d6bf9bc-lfvrk" (UID: "e194988c-f858-49e2-80a3-6dc7a273e8c7") : object "kube-system"/"coredns" not registered
	Sep 08 17:35:54 test-preload-547294 kubelet[1156]: E0908 17:35:54.352164    1156 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-lfvrk" podUID="e194988c-f858-49e2-80a3-6dc7a273e8c7"
	Sep 08 17:35:55 test-preload-547294 kubelet[1156]: E0908 17:35:55.864188    1156 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 08 17:35:55 test-preload-547294 kubelet[1156]: E0908 17:35:55.864258    1156 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e194988c-f858-49e2-80a3-6dc7a273e8c7-config-volume podName:e194988c-f858-49e2-80a3-6dc7a273e8c7 nodeName:}" failed. No retries permitted until 2025-09-08 17:35:59.864240007 +0000 UTC m=+13.672278570 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e194988c-f858-49e2-80a3-6dc7a273e8c7-config-volume") pod "coredns-668d6bf9bc-lfvrk" (UID: "e194988c-f858-49e2-80a3-6dc7a273e8c7") : object "kube-system"/"coredns" not registered
	Sep 08 17:35:56 test-preload-547294 kubelet[1156]: E0908 17:35:56.352505    1156 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-lfvrk" podUID="e194988c-f858-49e2-80a3-6dc7a273e8c7"
	Sep 08 17:35:56 test-preload-547294 kubelet[1156]: E0908 17:35:56.369549    1156 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757352956368754866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 08 17:35:56 test-preload-547294 kubelet[1156]: E0908 17:35:56.369591    1156 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757352956368754866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 08 17:36:06 test-preload-547294 kubelet[1156]: E0908 17:36:06.374481    1156 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757352966374223197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 08 17:36:06 test-preload-547294 kubelet[1156]: E0908 17:36:06.374504    1156 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757352966374223197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6df21e6cada209ca750916b0f82ddc34fbfb7720c45077524a917fb6862a72d4] <==
	I0908 17:35:53.001440       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-547294 -n test-preload-547294
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-547294 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-547294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-547294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-547294: (1.120426366s)
--- FAIL: TestPreload (160.16s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (121.98s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-582402 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0908 17:44:48.238859   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:45:54.296961   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-582402 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m57.768972s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-582402] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-582402" primary control-plane node in "pause-582402" cluster
	* Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-582402" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 17:44:46.762924   52672 out.go:360] Setting OutFile to fd 1 ...
	I0908 17:44:46.763046   52672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:44:46.763055   52672 out.go:374] Setting ErrFile to fd 2...
	I0908 17:44:46.763060   52672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:44:46.763298   52672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
	I0908 17:44:46.763825   52672 out.go:368] Setting JSON to false
	I0908 17:44:46.764812   52672 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5230,"bootTime":1757348257,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 17:44:46.764867   52672 start.go:140] virtualization: kvm guest
	I0908 17:44:46.766812   52672 out.go:179] * [pause-582402] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 17:44:46.768198   52672 notify.go:220] Checking for updates...
	I0908 17:44:46.768276   52672 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 17:44:46.769402   52672 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 17:44:46.770694   52672 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 17:44:46.771984   52672 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 17:44:46.773298   52672 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 17:44:46.774624   52672 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 17:44:46.776181   52672 config.go:182] Loaded profile config "pause-582402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:44:46.776584   52672 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:44:46.776641   52672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:44:46.792205   52672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I0908 17:44:46.792743   52672 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:44:46.793310   52672 main.go:141] libmachine: Using API Version  1
	I0908 17:44:46.793335   52672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:44:46.793755   52672 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:44:46.793945   52672 main.go:141] libmachine: (pause-582402) Calling .DriverName
	I0908 17:44:46.794193   52672 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 17:44:46.794491   52672 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:44:46.794554   52672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:44:46.810067   52672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40927
	I0908 17:44:46.810443   52672 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:44:46.810873   52672 main.go:141] libmachine: Using API Version  1
	I0908 17:44:46.810906   52672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:44:46.811234   52672 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:44:46.811422   52672 main.go:141] libmachine: (pause-582402) Calling .DriverName
	I0908 17:44:46.845175   52672 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 17:44:46.846413   52672 start.go:304] selected driver: kvm2
	I0908 17:44:46.846427   52672 start.go:918] validating driver "kvm2" against &{Name:pause-582402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-582402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 17:44:46.846555   52672 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 17:44:46.846926   52672 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 17:44:46.847001   52672 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21504-7629/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 17:44:46.862279   52672 install.go:137] /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 17:44:46.863237   52672 cni.go:84] Creating CNI manager for ""
	I0908 17:44:46.863303   52672 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 17:44:46.863382   52672 start.go:348] cluster config:
	{Name:pause-582402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-582402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 17:44:46.863602   52672 iso.go:125] acquiring lock: {Name:mkaf49872b434993209a65bf0f93ea3e4c6d93b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 17:44:46.866308   52672 out.go:179] * Starting "pause-582402" primary control-plane node in "pause-582402" cluster
	I0908 17:44:46.867731   52672 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 17:44:46.867786   52672 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 17:44:46.867796   52672 cache.go:58] Caching tarball of preloaded images
	I0908 17:44:46.867871   52672 preload.go:172] Found /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 17:44:46.867886   52672 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 17:44:46.868031   52672 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/pause-582402/config.json ...
	I0908 17:44:46.868241   52672 start.go:360] acquireMachinesLock for pause-582402: {Name:mka7c3ca4a3e37e9483e7804183d91c6725d32e4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 17:45:48.916568   52672 start.go:364] duration metric: took 1m2.048301524s to acquireMachinesLock for "pause-582402"
	I0908 17:45:48.916619   52672 start.go:96] Skipping create...Using existing machine configuration
	I0908 17:45:48.916626   52672 fix.go:54] fixHost starting: 
	I0908 17:45:48.917108   52672 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:45:48.917176   52672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:45:48.939535   52672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33497
	I0908 17:45:48.940038   52672 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:45:48.940541   52672 main.go:141] libmachine: Using API Version  1
	I0908 17:45:48.940567   52672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:45:48.940938   52672 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:45:48.941161   52672 main.go:141] libmachine: (pause-582402) Calling .DriverName
	I0908 17:45:48.941307   52672 main.go:141] libmachine: (pause-582402) Calling .GetState
	I0908 17:45:48.943278   52672 fix.go:112] recreateIfNeeded on pause-582402: state=Running err=<nil>
	W0908 17:45:48.943302   52672 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 17:45:48.945553   52672 out.go:252] * Updating the running kvm2 "pause-582402" VM ...
	I0908 17:45:48.945581   52672 machine.go:93] provisionDockerMachine start ...
	I0908 17:45:48.945606   52672 main.go:141] libmachine: (pause-582402) Calling .DriverName
	I0908 17:45:48.945825   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHHostname
	I0908 17:45:48.948361   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:48.948864   52672 main.go:141] libmachine: (pause-582402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fe:9a", ip: ""} in network mk-pause-582402: {Iface:virbr1 ExpiryTime:2025-09-08 18:43:35 +0000 UTC Type:0 Mac:52:54:00:ec:fe:9a Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:pause-582402 Clientid:01:52:54:00:ec:fe:9a}
	I0908 17:45:48.948898   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined IP address 192.168.39.196 and MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:48.949013   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHPort
	I0908 17:45:48.949230   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHKeyPath
	I0908 17:45:48.949389   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHKeyPath
	I0908 17:45:48.949552   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHUsername
	I0908 17:45:48.949717   52672 main.go:141] libmachine: Using SSH client type: native
	I0908 17:45:48.950014   52672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0908 17:45:48.950030   52672 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 17:45:49.065599   52672 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-582402
	
	I0908 17:45:49.065634   52672 main.go:141] libmachine: (pause-582402) Calling .GetMachineName
	I0908 17:45:49.065873   52672 buildroot.go:166] provisioning hostname "pause-582402"
	I0908 17:45:49.065897   52672 main.go:141] libmachine: (pause-582402) Calling .GetMachineName
	I0908 17:45:49.066091   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHHostname
	I0908 17:45:49.069321   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:49.069725   52672 main.go:141] libmachine: (pause-582402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fe:9a", ip: ""} in network mk-pause-582402: {Iface:virbr1 ExpiryTime:2025-09-08 18:43:35 +0000 UTC Type:0 Mac:52:54:00:ec:fe:9a Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:pause-582402 Clientid:01:52:54:00:ec:fe:9a}
	I0908 17:45:49.069769   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined IP address 192.168.39.196 and MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:49.069971   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHPort
	I0908 17:45:49.070189   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHKeyPath
	I0908 17:45:49.070366   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHKeyPath
	I0908 17:45:49.070500   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHUsername
	I0908 17:45:49.070687   52672 main.go:141] libmachine: Using SSH client type: native
	I0908 17:45:49.070946   52672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0908 17:45:49.070962   52672 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-582402 && echo "pause-582402" | sudo tee /etc/hostname
	I0908 17:45:49.202277   52672 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-582402
	
	I0908 17:45:49.202310   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHHostname
	I0908 17:45:49.205034   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:49.205422   52672 main.go:141] libmachine: (pause-582402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fe:9a", ip: ""} in network mk-pause-582402: {Iface:virbr1 ExpiryTime:2025-09-08 18:43:35 +0000 UTC Type:0 Mac:52:54:00:ec:fe:9a Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:pause-582402 Clientid:01:52:54:00:ec:fe:9a}
	I0908 17:45:49.205484   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined IP address 192.168.39.196 and MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:49.205678   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHPort
	I0908 17:45:49.205883   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHKeyPath
	I0908 17:45:49.206028   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHKeyPath
	I0908 17:45:49.206217   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHUsername
	I0908 17:45:49.206457   52672 main.go:141] libmachine: Using SSH client type: native
	I0908 17:45:49.206764   52672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0908 17:45:49.206792   52672 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-582402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-582402/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-582402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 17:45:49.322844   52672 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 17:45:49.322877   52672 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21504-7629/.minikube CaCertPath:/home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21504-7629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21504-7629/.minikube}
	I0908 17:45:49.322894   52672 buildroot.go:174] setting up certificates
	I0908 17:45:49.322901   52672 provision.go:84] configureAuth start
	I0908 17:45:49.322909   52672 main.go:141] libmachine: (pause-582402) Calling .GetMachineName
	I0908 17:45:49.323186   52672 main.go:141] libmachine: (pause-582402) Calling .GetIP
	I0908 17:45:49.326046   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:49.326456   52672 main.go:141] libmachine: (pause-582402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fe:9a", ip: ""} in network mk-pause-582402: {Iface:virbr1 ExpiryTime:2025-09-08 18:43:35 +0000 UTC Type:0 Mac:52:54:00:ec:fe:9a Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:pause-582402 Clientid:01:52:54:00:ec:fe:9a}
	I0908 17:45:49.326483   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined IP address 192.168.39.196 and MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:49.326719   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHHostname
	I0908 17:45:49.329639   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:49.330108   52672 main.go:141] libmachine: (pause-582402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fe:9a", ip: ""} in network mk-pause-582402: {Iface:virbr1 ExpiryTime:2025-09-08 18:43:35 +0000 UTC Type:0 Mac:52:54:00:ec:fe:9a Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:pause-582402 Clientid:01:52:54:00:ec:fe:9a}
	I0908 17:45:49.330138   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined IP address 192.168.39.196 and MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:49.330303   52672 provision.go:143] copyHostCerts
	I0908 17:45:49.330368   52672 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-7629/.minikube/ca.pem, removing ...
	I0908 17:45:49.330387   52672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-7629/.minikube/ca.pem
	I0908 17:45:49.330461   52672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21504-7629/.minikube/ca.pem (1078 bytes)
	I0908 17:45:49.330578   52672 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-7629/.minikube/cert.pem, removing ...
	I0908 17:45:49.330590   52672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-7629/.minikube/cert.pem
	I0908 17:45:49.330618   52672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21504-7629/.minikube/cert.pem (1123 bytes)
	I0908 17:45:49.330719   52672 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-7629/.minikube/key.pem, removing ...
	I0908 17:45:49.330732   52672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-7629/.minikube/key.pem
	I0908 17:45:49.330761   52672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21504-7629/.minikube/key.pem (1679 bytes)
	I0908 17:45:49.330826   52672 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21504-7629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca-key.pem org=jenkins.pause-582402 san=[127.0.0.1 192.168.39.196 localhost minikube pause-582402]
	I0908 17:45:49.565217   52672 provision.go:177] copyRemoteCerts
	I0908 17:45:49.565311   52672 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 17:45:49.565346   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHHostname
	I0908 17:45:49.568366   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:49.568746   52672 main.go:141] libmachine: (pause-582402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fe:9a", ip: ""} in network mk-pause-582402: {Iface:virbr1 ExpiryTime:2025-09-08 18:43:35 +0000 UTC Type:0 Mac:52:54:00:ec:fe:9a Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:pause-582402 Clientid:01:52:54:00:ec:fe:9a}
	I0908 17:45:49.568775   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined IP address 192.168.39.196 and MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:49.569007   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHPort
	I0908 17:45:49.569210   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHKeyPath
	I0908 17:45:49.569398   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHUsername
	I0908 17:45:49.569562   52672 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/pause-582402/id_rsa Username:docker}
	I0908 17:45:49.655943   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 17:45:49.700298   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0908 17:45:49.741106   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 17:45:49.791587   52672 provision.go:87] duration metric: took 468.673846ms to configureAuth
	I0908 17:45:49.791620   52672 buildroot.go:189] setting minikube options for container-runtime
	I0908 17:45:49.791881   52672 config.go:182] Loaded profile config "pause-582402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:45:49.791959   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHHostname
	I0908 17:45:49.795208   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:49.795616   52672 main.go:141] libmachine: (pause-582402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fe:9a", ip: ""} in network mk-pause-582402: {Iface:virbr1 ExpiryTime:2025-09-08 18:43:35 +0000 UTC Type:0 Mac:52:54:00:ec:fe:9a Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:pause-582402 Clientid:01:52:54:00:ec:fe:9a}
	I0908 17:45:49.795648   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined IP address 192.168.39.196 and MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:49.795834   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHPort
	I0908 17:45:49.796054   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHKeyPath
	I0908 17:45:49.796239   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHKeyPath
	I0908 17:45:49.796426   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHUsername
	I0908 17:45:49.796614   52672 main.go:141] libmachine: Using SSH client type: native
	I0908 17:45:49.796892   52672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0908 17:45:49.796917   52672 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 17:45:55.456696   52672 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 17:45:55.456725   52672 machine.go:96] duration metric: took 6.511135459s to provisionDockerMachine
	I0908 17:45:55.456740   52672 start.go:293] postStartSetup for "pause-582402" (driver="kvm2")
	I0908 17:45:55.456753   52672 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 17:45:55.456777   52672 main.go:141] libmachine: (pause-582402) Calling .DriverName
	I0908 17:45:55.457152   52672 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 17:45:55.457178   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHHostname
	I0908 17:45:55.460540   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:55.460966   52672 main.go:141] libmachine: (pause-582402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fe:9a", ip: ""} in network mk-pause-582402: {Iface:virbr1 ExpiryTime:2025-09-08 18:43:35 +0000 UTC Type:0 Mac:52:54:00:ec:fe:9a Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:pause-582402 Clientid:01:52:54:00:ec:fe:9a}
	I0908 17:45:55.461002   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined IP address 192.168.39.196 and MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:55.461192   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHPort
	I0908 17:45:55.461387   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHKeyPath
	I0908 17:45:55.461604   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHUsername
	I0908 17:45:55.461802   52672 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/pause-582402/id_rsa Username:docker}
	I0908 17:45:55.554219   52672 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 17:45:55.560797   52672 info.go:137] Remote host: Buildroot 2025.02
	I0908 17:45:55.560830   52672 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-7629/.minikube/addons for local assets ...
	I0908 17:45:55.560927   52672 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-7629/.minikube/files for local assets ...
	I0908 17:45:55.561076   52672 filesync.go:149] local asset: /home/jenkins/minikube-integration/21504-7629/.minikube/files/etc/ssl/certs/117812.pem -> 117812.pem in /etc/ssl/certs
	I0908 17:45:55.561182   52672 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 17:45:55.574936   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/files/etc/ssl/certs/117812.pem --> /etc/ssl/certs/117812.pem (1708 bytes)
	I0908 17:45:55.612807   52672 start.go:296] duration metric: took 156.051406ms for postStartSetup
	I0908 17:45:55.612854   52672 fix.go:56] duration metric: took 6.696226732s for fixHost
	I0908 17:45:55.612879   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHHostname
	I0908 17:45:55.615591   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:55.615940   52672 main.go:141] libmachine: (pause-582402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fe:9a", ip: ""} in network mk-pause-582402: {Iface:virbr1 ExpiryTime:2025-09-08 18:43:35 +0000 UTC Type:0 Mac:52:54:00:ec:fe:9a Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:pause-582402 Clientid:01:52:54:00:ec:fe:9a}
	I0908 17:45:55.615979   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined IP address 192.168.39.196 and MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:55.616157   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHPort
	I0908 17:45:55.616406   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHKeyPath
	I0908 17:45:55.616547   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHKeyPath
	I0908 17:45:55.616684   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHUsername
	I0908 17:45:55.616838   52672 main.go:141] libmachine: Using SSH client type: native
	I0908 17:45:55.617030   52672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0908 17:45:55.617040   52672 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0908 17:45:55.734037   52672 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757353555.729416722
	
	I0908 17:45:55.734065   52672 fix.go:216] guest clock: 1757353555.729416722
	I0908 17:45:55.734075   52672 fix.go:229] Guest: 2025-09-08 17:45:55.729416722 +0000 UTC Remote: 2025-09-08 17:45:55.612859938 +0000 UTC m=+68.886329837 (delta=116.556784ms)
	I0908 17:45:55.734101   52672 fix.go:200] guest clock delta is within tolerance: 116.556784ms
	I0908 17:45:55.734108   52672 start.go:83] releasing machines lock for "pause-582402", held for 6.817511923s
	I0908 17:45:55.734131   52672 main.go:141] libmachine: (pause-582402) Calling .DriverName
	I0908 17:45:55.734458   52672 main.go:141] libmachine: (pause-582402) Calling .GetIP
	I0908 17:45:55.737979   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:55.738350   52672 main.go:141] libmachine: (pause-582402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fe:9a", ip: ""} in network mk-pause-582402: {Iface:virbr1 ExpiryTime:2025-09-08 18:43:35 +0000 UTC Type:0 Mac:52:54:00:ec:fe:9a Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:pause-582402 Clientid:01:52:54:00:ec:fe:9a}
	I0908 17:45:55.738378   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined IP address 192.168.39.196 and MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:55.738629   52672 main.go:141] libmachine: (pause-582402) Calling .DriverName
	I0908 17:45:55.739268   52672 main.go:141] libmachine: (pause-582402) Calling .DriverName
	I0908 17:45:55.739467   52672 main.go:141] libmachine: (pause-582402) Calling .DriverName
	I0908 17:45:55.739561   52672 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 17:45:55.739611   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHHostname
	I0908 17:45:55.739736   52672 ssh_runner.go:195] Run: cat /version.json
	I0908 17:45:55.739764   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHHostname
	I0908 17:45:55.742852   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:55.743303   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:55.743340   52672 main.go:141] libmachine: (pause-582402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fe:9a", ip: ""} in network mk-pause-582402: {Iface:virbr1 ExpiryTime:2025-09-08 18:43:35 +0000 UTC Type:0 Mac:52:54:00:ec:fe:9a Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:pause-582402 Clientid:01:52:54:00:ec:fe:9a}
	I0908 17:45:55.743360   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined IP address 192.168.39.196 and MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:55.743772   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHPort
	I0908 17:45:55.744002   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHKeyPath
	I0908 17:45:55.744183   52672 main.go:141] libmachine: (pause-582402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fe:9a", ip: ""} in network mk-pause-582402: {Iface:virbr1 ExpiryTime:2025-09-08 18:43:35 +0000 UTC Type:0 Mac:52:54:00:ec:fe:9a Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:pause-582402 Clientid:01:52:54:00:ec:fe:9a}
	I0908 17:45:55.744205   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined IP address 192.168.39.196 and MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:45:55.744239   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHUsername
	I0908 17:45:55.744427   52672 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/pause-582402/id_rsa Username:docker}
	I0908 17:45:55.744698   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHPort
	I0908 17:45:55.744846   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHKeyPath
	I0908 17:45:55.744968   52672 main.go:141] libmachine: (pause-582402) Calling .GetSSHUsername
	I0908 17:45:55.745107   52672 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/pause-582402/id_rsa Username:docker}
	I0908 17:45:55.830473   52672 ssh_runner.go:195] Run: systemctl --version
	I0908 17:45:55.864021   52672 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 17:45:56.020329   52672 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0908 17:45:56.030229   52672 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0908 17:45:56.030348   52672 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 17:45:56.042443   52672 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 17:45:56.042473   52672 start.go:495] detecting cgroup driver to use...
	I0908 17:45:56.042557   52672 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 17:45:56.064946   52672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 17:45:56.083645   52672 docker.go:218] disabling cri-docker service (if available) ...
	I0908 17:45:56.083709   52672 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 17:45:56.101214   52672 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 17:45:56.120059   52672 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 17:45:56.329259   52672 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 17:45:56.526544   52672 docker.go:234] disabling docker service ...
	I0908 17:45:56.526628   52672 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 17:45:56.564487   52672 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 17:45:56.583866   52672 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 17:45:56.802854   52672 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 17:45:57.002083   52672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 17:45:57.022294   52672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 17:45:57.047661   52672 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 17:45:57.047730   52672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 17:45:57.062452   52672 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 17:45:57.062552   52672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 17:45:57.077728   52672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 17:45:57.092390   52672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 17:45:57.108618   52672 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 17:45:57.127898   52672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 17:45:57.141908   52672 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 17:45:57.156725   52672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 17:45:57.169653   52672 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 17:45:57.180611   52672 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 17:45:57.193710   52672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 17:45:57.383861   52672 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 17:46:04.039112   52672 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.655213644s)
	I0908 17:46:04.039147   52672 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 17:46:04.039206   52672 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 17:46:04.047341   52672 start.go:563] Will wait 60s for crictl version
	I0908 17:46:04.047411   52672 ssh_runner.go:195] Run: which crictl
	I0908 17:46:04.052871   52672 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 17:46:04.097260   52672 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0908 17:46:04.097369   52672 ssh_runner.go:195] Run: crio --version
	I0908 17:46:04.129459   52672 ssh_runner.go:195] Run: crio --version
	I0908 17:46:04.165524   52672 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0908 17:46:04.166893   52672 main.go:141] libmachine: (pause-582402) Calling .GetIP
	I0908 17:46:04.170291   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:46:04.170729   52672 main.go:141] libmachine: (pause-582402) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:fe:9a", ip: ""} in network mk-pause-582402: {Iface:virbr1 ExpiryTime:2025-09-08 18:43:35 +0000 UTC Type:0 Mac:52:54:00:ec:fe:9a Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:pause-582402 Clientid:01:52:54:00:ec:fe:9a}
	I0908 17:46:04.170763   52672 main.go:141] libmachine: (pause-582402) DBG | domain pause-582402 has defined IP address 192.168.39.196 and MAC address 52:54:00:ec:fe:9a in network mk-pause-582402
	I0908 17:46:04.171020   52672 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0908 17:46:04.176546   52672 kubeadm.go:875] updating cluster {Name:pause-582402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0
ClusterName:pause-582402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 17:46:04.176740   52672 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 17:46:04.176808   52672 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 17:46:04.235876   52672 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 17:46:04.235912   52672 crio.go:433] Images already preloaded, skipping extraction
	I0908 17:46:04.235972   52672 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 17:46:04.286057   52672 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 17:46:04.286087   52672 cache_images.go:85] Images are preloaded, skipping loading
	I0908 17:46:04.286096   52672 kubeadm.go:926] updating node { 192.168.39.196 8443 v1.34.0 crio true true} ...
	I0908 17:46:04.286216   52672 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-582402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-582402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 17:46:04.286303   52672 ssh_runner.go:195] Run: crio config
	I0908 17:46:04.353779   52672 cni.go:84] Creating CNI manager for ""
	I0908 17:46:04.353818   52672 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 17:46:04.353835   52672 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 17:46:04.353895   52672 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-582402 NodeName:pause-582402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 17:46:04.354216   52672 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-582402"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.196"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 17:46:04.354304   52672 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 17:46:04.367866   52672 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 17:46:04.367938   52672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 17:46:04.384964   52672 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0908 17:46:04.408787   52672 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 17:46:04.432298   52672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I0908 17:46:04.458736   52672 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I0908 17:46:04.463977   52672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 17:46:04.635286   52672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 17:46:04.654887   52672 certs.go:68] Setting up /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/pause-582402 for IP: 192.168.39.196
	I0908 17:46:04.654914   52672 certs.go:194] generating shared ca certs ...
	I0908 17:46:04.654927   52672 certs.go:226] acquiring lock for ca certs: {Name:mk97fb352a8636fddbcae5a6f40efc0f573cd949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 17:46:04.655105   52672 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21504-7629/.minikube/ca.key
	I0908 17:46:04.655186   52672 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21504-7629/.minikube/proxy-client-ca.key
	I0908 17:46:04.655202   52672 certs.go:256] generating profile certs ...
	I0908 17:46:04.655318   52672 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/pause-582402/client.key
	I0908 17:46:04.655428   52672 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/pause-582402/apiserver.key.a5bf78b7
	I0908 17:46:04.655491   52672 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/pause-582402/proxy-client.key
	I0908 17:46:04.655641   52672 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/11781.pem (1338 bytes)
	W0908 17:46:04.655685   52672 certs.go:480] ignoring /home/jenkins/minikube-integration/21504-7629/.minikube/certs/11781_empty.pem, impossibly tiny 0 bytes
	I0908 17:46:04.655701   52672 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca-key.pem (1671 bytes)
	I0908 17:46:04.655735   52672 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem (1078 bytes)
	I0908 17:46:04.655769   52672 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/cert.pem (1123 bytes)
	I0908 17:46:04.655807   52672 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/certs/key.pem (1679 bytes)
	I0908 17:46:04.655899   52672 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-7629/.minikube/files/etc/ssl/certs/117812.pem (1708 bytes)
	I0908 17:46:04.656536   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 17:46:04.690312   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 17:46:04.731198   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 17:46:04.768750   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 17:46:04.814788   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/pause-582402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 17:46:04.939894   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/pause-582402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 17:46:04.991233   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/pause-582402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 17:46:05.085097   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/pause-582402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 17:46:05.205406   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/certs/11781.pem --> /usr/share/ca-certificates/11781.pem (1338 bytes)
	I0908 17:46:05.308353   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/files/etc/ssl/certs/117812.pem --> /usr/share/ca-certificates/117812.pem (1708 bytes)
	I0908 17:46:05.375695   52672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-7629/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 17:46:05.451473   52672 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 17:46:05.520406   52672 ssh_runner.go:195] Run: openssl version
	I0908 17:46:05.533821   52672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117812.pem && ln -fs /usr/share/ca-certificates/117812.pem /etc/ssl/certs/117812.pem"
	I0908 17:46:05.567147   52672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117812.pem
	I0908 17:46:05.577580   52672 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 16:46 /usr/share/ca-certificates/117812.pem
	I0908 17:46:05.577647   52672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117812.pem
	I0908 17:46:05.591427   52672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117812.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 17:46:05.612474   52672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 17:46:05.643784   52672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 17:46:05.658906   52672 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 16:37 /usr/share/ca-certificates/minikubeCA.pem
	I0908 17:46:05.658992   52672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 17:46:05.685294   52672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 17:46:05.712662   52672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11781.pem && ln -fs /usr/share/ca-certificates/11781.pem /etc/ssl/certs/11781.pem"
	I0908 17:46:05.747749   52672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11781.pem
	I0908 17:46:05.760762   52672 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 16:46 /usr/share/ca-certificates/11781.pem
	I0908 17:46:05.760843   52672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11781.pem
	I0908 17:46:05.781575   52672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11781.pem /etc/ssl/certs/51391683.0"
	I0908 17:46:05.826083   52672 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 17:46:05.868048   52672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 17:46:05.888994   52672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 17:46:05.902060   52672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 17:46:05.920831   52672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 17:46:05.946073   52672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 17:46:05.968162   52672 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0908 17:46:05.996366   52672 kubeadm.go:392] StartCluster: {Name:pause-582402 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 Cl
usterName:pause-582402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 17:46:05.996478   52672 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 17:46:05.996558   52672 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 17:46:06.172127   52672 cri.go:89] found id: "c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd"
	I0908 17:46:06.172158   52672 cri.go:89] found id: "38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5130b38a54e8"
	I0908 17:46:06.172165   52672 cri.go:89] found id: "57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76"
	I0908 17:46:06.172170   52672 cri.go:89] found id: "c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251"
	I0908 17:46:06.172174   52672 cri.go:89] found id: "3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14"
	I0908 17:46:06.172180   52672 cri.go:89] found id: "6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567"
	I0908 17:46:06.172184   52672 cri.go:89] found id: "3a809949aa75c8b15560bcb94da3b52ee69714288ea99f94d520cd42109caa70"
	I0908 17:46:06.172189   52672 cri.go:89] found id: "87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d"
	I0908 17:46:06.172194   52672 cri.go:89] found id: "4230c39d270b7cfaea11cf53a694273e2366581fe2c9e43e766ef969ea14962a"
	I0908 17:46:06.172203   52672 cri.go:89] found id: "4fc28498bd5596d1c10917ff34757e7634be86e000adead59a5fa200e1f4f71a"
	I0908 17:46:06.172208   52672 cri.go:89] found id: "aecdc8609890df363dc852143482e4e4ad10e31c2f12fb490bf8dc1783d5bb65"
	I0908 17:46:06.172212   52672 cri.go:89] found id: ""
	I0908 17:46:06.172259   52672 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
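Note on the trace above: before re-provisioning, the second start enumerates the kube-system containers already present in CRI-O (the `sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"` step, whose container IDs are then listed one per line). For readers who want to reproduce that check by hand on the node, here is a minimal Go sketch; the crictl flags mirror the logged command, while the wrapper program itself is purely illustrative and not taken from the minikube source.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same filter as the logged step: all containers (-a) in the kube-system
		// namespace, printing container IDs only (--quiet).
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}

An empty result from this query is what drives a fresh bootstrap, whereas the non-empty list seen above is why the second start proceeds to reuse the existing control-plane containers.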
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-582402 -n pause-582402
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-582402 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-582402 logs -n 25: (1.421802242s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-387181 sudo systemctl status kubelet --all --full --no-pager                                                                 │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo systemctl cat kubelet --no-pager                                                                                 │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo journalctl -xeu kubelet --all --full --no-pager                                                                  │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo cat /etc/kubernetes/kubelet.conf                                                                                 │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo cat /var/lib/kubelet/config.yaml                                                                                 │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo systemctl status docker --all --full --no-pager                                                                  │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │                     │
	│ ssh     │ -p auto-387181 sudo systemctl cat docker --no-pager                                                                                  │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo cat /etc/docker/daemon.json                                                                                      │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo docker system info                                                                                               │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │                     │
	│ ssh     │ -p auto-387181 sudo systemctl status cri-docker --all --full --no-pager                                                              │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │                     │
	│ ssh     │ -p auto-387181 sudo systemctl cat cri-docker --no-pager                                                                              │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                         │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │                     │
	│ ssh     │ -p auto-387181 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                   │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo cri-dockerd --version                                                                                            │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo systemctl status containerd --all --full --no-pager                                                              │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │                     │
	│ ssh     │ -p auto-387181 sudo systemctl cat containerd --no-pager                                                                              │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo cat /lib/systemd/system/containerd.service                                                                       │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo cat /etc/containerd/config.toml                                                                                  │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo containerd config dump                                                                                           │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo systemctl status crio --all --full --no-pager                                                                    │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo systemctl cat crio --no-pager                                                                                    │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                          │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo crio config                                                                                                      │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ delete  │ -p auto-387181                                                                                                                       │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ start   │ -p bridge-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio │ bridge-387181 │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 17:46:25
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 17:46:25.848328   54844 out.go:360] Setting OutFile to fd 1 ...
	I0908 17:46:25.848434   54844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:46:25.848440   54844 out.go:374] Setting ErrFile to fd 2...
	I0908 17:46:25.848447   54844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:46:25.848638   54844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
	I0908 17:46:25.849246   54844 out.go:368] Setting JSON to false
	I0908 17:46:25.850315   54844 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5329,"bootTime":1757348257,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 17:46:25.850400   54844 start.go:140] virtualization: kvm guest
	I0908 17:46:25.852421   54844 out.go:179] * [bridge-387181] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 17:46:25.853752   54844 notify.go:220] Checking for updates...
	I0908 17:46:25.853769   54844 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 17:46:25.855120   54844 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 17:46:25.856317   54844 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 17:46:25.857599   54844 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 17:46:25.858876   54844 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 17:46:25.860076   54844 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 17:46:25.861698   54844 config.go:182] Loaded profile config "enable-default-cni-387181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:46:25.861791   54844 config.go:182] Loaded profile config "flannel-387181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:46:25.861912   54844 config.go:182] Loaded profile config "pause-582402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:46:25.862014   54844 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 17:46:25.901591   54844 out.go:179] * Using the kvm2 driver based on user configuration
	W0908 17:46:20.940856   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:22.942786   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:25.439765   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	I0908 17:46:25.902778   54844 start.go:304] selected driver: kvm2
	I0908 17:46:25.902794   54844 start.go:918] validating driver "kvm2" against <nil>
	I0908 17:46:25.902819   54844 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 17:46:25.903511   54844 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 17:46:25.903589   54844 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21504-7629/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 17:46:25.920736   54844 install.go:137] /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 17:46:25.920798   54844 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 17:46:25.921083   54844 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 17:46:25.921118   54844 cni.go:84] Creating CNI manager for "bridge"
	I0908 17:46:25.921127   54844 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 17:46:25.921182   54844 start.go:348] cluster config:
	{Name:bridge-387181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-387181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0908 17:46:25.921297   54844 iso.go:125] acquiring lock: {Name:mkaf49872b434993209a65bf0f93ea3e4c6d93b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 17:46:25.923996   54844 out.go:179] * Starting "bridge-387181" primary control-plane node in "bridge-387181" cluster
	I0908 17:46:25.925289   54844 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 17:46:25.925343   54844 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 17:46:25.925357   54844 cache.go:58] Caching tarball of preloaded images
	I0908 17:46:25.925457   54844 preload.go:172] Found /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 17:46:25.925473   54844 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 17:46:25.925563   54844 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/config.json ...
	I0908 17:46:25.925581   54844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/config.json: {Name:mka1fee2bee3480332a585fe316a7f58fdee8bc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 17:46:25.925753   54844 start.go:360] acquireMachinesLock for bridge-387181: {Name:mka7c3ca4a3e37e9483e7804183d91c6725d32e4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 17:46:25.925789   54844 start.go:364] duration metric: took 19.22µs to acquireMachinesLock for "bridge-387181"
	I0908 17:46:25.925812   54844 start.go:93] Provisioning new machine with config: &{Name:bridge-387181 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.0 ClusterName:bridge-387181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 17:46:25.925880   54844 start.go:125] createHost starting for "" (driver="kvm2")
	W0908 17:46:24.210683   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	W0908 17:46:26.708701   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	I0908 17:46:25.928336   54844 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 17:46:25.928502   54844 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:46:25.928559   54844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:46:25.946805   54844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0908 17:46:25.947339   54844 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:46:25.947863   54844 main.go:141] libmachine: Using API Version  1
	I0908 17:46:25.947884   54844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:46:25.948261   54844 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:46:25.948478   54844 main.go:141] libmachine: (bridge-387181) Calling .GetMachineName
	I0908 17:46:25.948672   54844 main.go:141] libmachine: (bridge-387181) Calling .DriverName
	I0908 17:46:25.948846   54844 start.go:159] libmachine.API.Create for "bridge-387181" (driver="kvm2")
	I0908 17:46:25.948877   54844 client.go:168] LocalClient.Create starting
	I0908 17:46:25.948907   54844 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem
	I0908 17:46:25.948940   54844 main.go:141] libmachine: Decoding PEM data...
	I0908 17:46:25.948957   54844 main.go:141] libmachine: Parsing certificate...
	I0908 17:46:25.949020   54844 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21504-7629/.minikube/certs/cert.pem
	I0908 17:46:25.949044   54844 main.go:141] libmachine: Decoding PEM data...
	I0908 17:46:25.949057   54844 main.go:141] libmachine: Parsing certificate...
	I0908 17:46:25.949072   54844 main.go:141] libmachine: Running pre-create checks...
	I0908 17:46:25.949080   54844 main.go:141] libmachine: (bridge-387181) Calling .PreCreateCheck
	I0908 17:46:25.949364   54844 main.go:141] libmachine: (bridge-387181) Calling .GetConfigRaw
	I0908 17:46:25.949709   54844 main.go:141] libmachine: Creating machine...
	I0908 17:46:25.949728   54844 main.go:141] libmachine: (bridge-387181) Calling .Create
	I0908 17:46:25.949929   54844 main.go:141] libmachine: (bridge-387181) creating KVM machine...
	I0908 17:46:25.949945   54844 main.go:141] libmachine: (bridge-387181) creating network...
	I0908 17:46:25.951425   54844 main.go:141] libmachine: (bridge-387181) DBG | found existing default KVM network
	I0908 17:46:25.952310   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:25.952161   54867 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b2:27:32} reservation:<nil>}
	I0908 17:46:25.953385   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:25.953293   54867 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011ba50}
	I0908 17:46:25.953413   54844 main.go:141] libmachine: (bridge-387181) DBG | created network xml: 
	I0908 17:46:25.953426   54844 main.go:141] libmachine: (bridge-387181) DBG | <network>
	I0908 17:46:25.953433   54844 main.go:141] libmachine: (bridge-387181) DBG |   <name>mk-bridge-387181</name>
	I0908 17:46:25.953443   54844 main.go:141] libmachine: (bridge-387181) DBG |   <dns enable='no'/>
	I0908 17:46:25.953455   54844 main.go:141] libmachine: (bridge-387181) DBG |   
	I0908 17:46:25.953468   54844 main.go:141] libmachine: (bridge-387181) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0908 17:46:25.953480   54844 main.go:141] libmachine: (bridge-387181) DBG |     <dhcp>
	I0908 17:46:25.953500   54844 main.go:141] libmachine: (bridge-387181) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0908 17:46:25.953514   54844 main.go:141] libmachine: (bridge-387181) DBG |     </dhcp>
	I0908 17:46:25.953525   54844 main.go:141] libmachine: (bridge-387181) DBG |   </ip>
	I0908 17:46:25.953532   54844 main.go:141] libmachine: (bridge-387181) DBG |   
	I0908 17:46:25.953540   54844 main.go:141] libmachine: (bridge-387181) DBG | </network>
	I0908 17:46:25.953546   54844 main.go:141] libmachine: (bridge-387181) DBG | 
	I0908 17:46:25.959032   54844 main.go:141] libmachine: (bridge-387181) DBG | trying to create private KVM network mk-bridge-387181 192.168.50.0/24...
	I0908 17:46:26.035206   54844 main.go:141] libmachine: (bridge-387181) DBG | private KVM network mk-bridge-387181 192.168.50.0/24 created
	I0908 17:46:26.035277   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:26.035168   54867 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 17:46:26.035292   54844 main.go:141] libmachine: (bridge-387181) setting up store path in /home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181 ...
	I0908 17:46:26.035323   54844 main.go:141] libmachine: (bridge-387181) building disk image from file:///home/jenkins/minikube-integration/21504-7629/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0908 17:46:26.035342   54844 main.go:141] libmachine: (bridge-387181) Downloading /home/jenkins/minikube-integration/21504-7629/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21504-7629/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 17:46:26.320658   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:26.320517   54867 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181/id_rsa...
	I0908 17:46:26.527875   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:26.527712   54867 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181/bridge-387181.rawdisk...
	I0908 17:46:26.527911   54844 main.go:141] libmachine: (bridge-387181) DBG | Writing magic tar header
	I0908 17:46:26.527926   54844 main.go:141] libmachine: (bridge-387181) DBG | Writing SSH key tar header
	I0908 17:46:26.527938   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:26.527822   54867 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181 ...
	I0908 17:46:26.527950   54844 main.go:141] libmachine: (bridge-387181) setting executable bit set on /home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181 (perms=drwx------)
	I0908 17:46:26.527964   54844 main.go:141] libmachine: (bridge-387181) setting executable bit set on /home/jenkins/minikube-integration/21504-7629/.minikube/machines (perms=drwxr-xr-x)
	I0908 17:46:26.527975   54844 main.go:141] libmachine: (bridge-387181) setting executable bit set on /home/jenkins/minikube-integration/21504-7629/.minikube (perms=drwxr-xr-x)
	I0908 17:46:26.528005   54844 main.go:141] libmachine: (bridge-387181) setting executable bit set on /home/jenkins/minikube-integration/21504-7629 (perms=drwxrwxr-x)
	I0908 17:46:26.528021   54844 main.go:141] libmachine: (bridge-387181) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0908 17:46:26.528032   54844 main.go:141] libmachine: (bridge-387181) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181
	I0908 17:46:26.528048   54844 main.go:141] libmachine: (bridge-387181) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21504-7629/.minikube/machines
	I0908 17:46:26.528057   54844 main.go:141] libmachine: (bridge-387181) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 17:46:26.528068   54844 main.go:141] libmachine: (bridge-387181) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21504-7629
	I0908 17:46:26.528084   54844 main.go:141] libmachine: (bridge-387181) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0908 17:46:26.528096   54844 main.go:141] libmachine: (bridge-387181) DBG | checking permissions on dir: /home/jenkins
	I0908 17:46:26.528113   54844 main.go:141] libmachine: (bridge-387181) DBG | checking permissions on dir: /home
	I0908 17:46:26.528124   54844 main.go:141] libmachine: (bridge-387181) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0908 17:46:26.528134   54844 main.go:141] libmachine: (bridge-387181) DBG | skipping /home - not owner
	I0908 17:46:26.528151   54844 main.go:141] libmachine: (bridge-387181) creating domain...
	I0908 17:46:26.529360   54844 main.go:141] libmachine: (bridge-387181) define libvirt domain using xml: 
	I0908 17:46:26.529382   54844 main.go:141] libmachine: (bridge-387181) <domain type='kvm'>
	I0908 17:46:26.529389   54844 main.go:141] libmachine: (bridge-387181)   <name>bridge-387181</name>
	I0908 17:46:26.529398   54844 main.go:141] libmachine: (bridge-387181)   <memory unit='MiB'>3072</memory>
	I0908 17:46:26.529404   54844 main.go:141] libmachine: (bridge-387181)   <vcpu>2</vcpu>
	I0908 17:46:26.529409   54844 main.go:141] libmachine: (bridge-387181)   <features>
	I0908 17:46:26.529414   54844 main.go:141] libmachine: (bridge-387181)     <acpi/>
	I0908 17:46:26.529444   54844 main.go:141] libmachine: (bridge-387181)     <apic/>
	I0908 17:46:26.529455   54844 main.go:141] libmachine: (bridge-387181)     <pae/>
	I0908 17:46:26.529465   54844 main.go:141] libmachine: (bridge-387181)     
	I0908 17:46:26.529473   54844 main.go:141] libmachine: (bridge-387181)   </features>
	I0908 17:46:26.529485   54844 main.go:141] libmachine: (bridge-387181)   <cpu mode='host-passthrough'>
	I0908 17:46:26.529502   54844 main.go:141] libmachine: (bridge-387181)   
	I0908 17:46:26.529506   54844 main.go:141] libmachine: (bridge-387181)   </cpu>
	I0908 17:46:26.529510   54844 main.go:141] libmachine: (bridge-387181)   <os>
	I0908 17:46:26.529515   54844 main.go:141] libmachine: (bridge-387181)     <type>hvm</type>
	I0908 17:46:26.529521   54844 main.go:141] libmachine: (bridge-387181)     <boot dev='cdrom'/>
	I0908 17:46:26.529547   54844 main.go:141] libmachine: (bridge-387181)     <boot dev='hd'/>
	I0908 17:46:26.529566   54844 main.go:141] libmachine: (bridge-387181)     <bootmenu enable='no'/>
	I0908 17:46:26.529572   54844 main.go:141] libmachine: (bridge-387181)   </os>
	I0908 17:46:26.529576   54844 main.go:141] libmachine: (bridge-387181)   <devices>
	I0908 17:46:26.529584   54844 main.go:141] libmachine: (bridge-387181)     <disk type='file' device='cdrom'>
	I0908 17:46:26.529602   54844 main.go:141] libmachine: (bridge-387181)       <source file='/home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181/boot2docker.iso'/>
	I0908 17:46:26.529610   54844 main.go:141] libmachine: (bridge-387181)       <target dev='hdc' bus='scsi'/>
	I0908 17:46:26.529614   54844 main.go:141] libmachine: (bridge-387181)       <readonly/>
	I0908 17:46:26.529619   54844 main.go:141] libmachine: (bridge-387181)     </disk>
	I0908 17:46:26.529625   54844 main.go:141] libmachine: (bridge-387181)     <disk type='file' device='disk'>
	I0908 17:46:26.529632   54844 main.go:141] libmachine: (bridge-387181)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0908 17:46:26.529641   54844 main.go:141] libmachine: (bridge-387181)       <source file='/home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181/bridge-387181.rawdisk'/>
	I0908 17:46:26.529646   54844 main.go:141] libmachine: (bridge-387181)       <target dev='hda' bus='virtio'/>
	I0908 17:46:26.529657   54844 main.go:141] libmachine: (bridge-387181)     </disk>
	I0908 17:46:26.529665   54844 main.go:141] libmachine: (bridge-387181)     <interface type='network'>
	I0908 17:46:26.529672   54844 main.go:141] libmachine: (bridge-387181)       <source network='mk-bridge-387181'/>
	I0908 17:46:26.529680   54844 main.go:141] libmachine: (bridge-387181)       <model type='virtio'/>
	I0908 17:46:26.529691   54844 main.go:141] libmachine: (bridge-387181)     </interface>
	I0908 17:46:26.529696   54844 main.go:141] libmachine: (bridge-387181)     <interface type='network'>
	I0908 17:46:26.529703   54844 main.go:141] libmachine: (bridge-387181)       <source network='default'/>
	I0908 17:46:26.529708   54844 main.go:141] libmachine: (bridge-387181)       <model type='virtio'/>
	I0908 17:46:26.529714   54844 main.go:141] libmachine: (bridge-387181)     </interface>
	I0908 17:46:26.529719   54844 main.go:141] libmachine: (bridge-387181)     <serial type='pty'>
	I0908 17:46:26.529723   54844 main.go:141] libmachine: (bridge-387181)       <target port='0'/>
	I0908 17:46:26.529750   54844 main.go:141] libmachine: (bridge-387181)     </serial>
	I0908 17:46:26.529774   54844 main.go:141] libmachine: (bridge-387181)     <console type='pty'>
	I0908 17:46:26.529786   54844 main.go:141] libmachine: (bridge-387181)       <target type='serial' port='0'/>
	I0908 17:46:26.529796   54844 main.go:141] libmachine: (bridge-387181)     </console>
	I0908 17:46:26.529804   54844 main.go:141] libmachine: (bridge-387181)     <rng model='virtio'>
	I0908 17:46:26.529816   54844 main.go:141] libmachine: (bridge-387181)       <backend model='random'>/dev/random</backend>
	I0908 17:46:26.529827   54844 main.go:141] libmachine: (bridge-387181)     </rng>
	I0908 17:46:26.529834   54844 main.go:141] libmachine: (bridge-387181)     
	I0908 17:46:26.529845   54844 main.go:141] libmachine: (bridge-387181)     
	I0908 17:46:26.529853   54844 main.go:141] libmachine: (bridge-387181)   </devices>
	I0908 17:46:26.529858   54844 main.go:141] libmachine: (bridge-387181) </domain>
	I0908 17:46:26.529864   54844 main.go:141] libmachine: (bridge-387181) 
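	[editor's note] The lines above show the kvm2 driver generating a libvirt domain XML document and then defining and starting that domain. Below is a minimal sketch of the same define-then-start sequence using the libvirt Go bindings; the import path, connection URI, and the XML file name are assumptions for illustration and are not taken from the log.

	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
	)

	func main() {
		// Connect to the local system hypervisor (URI assumed; the driver's connection may differ).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			fmt.Fprintln(os.Stderr, "connect:", err)
			os.Exit(1)
		}
		defer conn.Close()

		// domainXML would hold a <domain type='kvm'>...</domain> document like the one logged above.
		domainXML, err := os.ReadFile("bridge-387181.xml") // hypothetical file containing that XML
		if err != nil {
			fmt.Fprintln(os.Stderr, "read xml:", err)
			os.Exit(1)
		}

		// Define the persistent domain, then start it ("define libvirt domain using xml" / "starting domain...").
		dom, err := conn.DomainDefineXML(string(domainXML))
		if err != nil {
			fmt.Fprintln(os.Stderr, "define:", err)
			os.Exit(1)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			fmt.Fprintln(os.Stderr, "start:", err)
			os.Exit(1)
		}
		fmt.Println("domain defined and started")
	}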
	I0908 17:46:26.534136   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:f7:12:48 in network default
	I0908 17:46:26.534730   54844 main.go:141] libmachine: (bridge-387181) starting domain...
	I0908 17:46:26.534758   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:26.534766   54844 main.go:141] libmachine: (bridge-387181) ensuring networks are active...
	I0908 17:46:26.535458   54844 main.go:141] libmachine: (bridge-387181) Ensuring network default is active
	I0908 17:46:26.535851   54844 main.go:141] libmachine: (bridge-387181) Ensuring network mk-bridge-387181 is active
	I0908 17:46:26.536392   54844 main.go:141] libmachine: (bridge-387181) getting domain XML...
	I0908 17:46:26.537239   54844 main.go:141] libmachine: (bridge-387181) creating domain...
	I0908 17:46:27.876600   54844 main.go:141] libmachine: (bridge-387181) waiting for IP...
	I0908 17:46:27.877456   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:27.877964   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:27.878040   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:27.877966   54867 retry.go:31] will retry after 259.198907ms: waiting for domain to come up
	I0908 17:46:28.138640   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:28.139234   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:28.139265   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:28.139207   54867 retry.go:31] will retry after 308.03493ms: waiting for domain to come up
	I0908 17:46:28.448874   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:28.449395   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:28.449462   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:28.449372   54867 retry.go:31] will retry after 483.610435ms: waiting for domain to come up
	I0908 17:46:28.934958   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:28.935681   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:28.935708   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:28.935645   54867 retry.go:31] will retry after 409.672152ms: waiting for domain to come up
	I0908 17:46:29.347313   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:29.347932   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:29.347982   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:29.347924   54867 retry.go:31] will retry after 645.671268ms: waiting for domain to come up
	I0908 17:46:29.995830   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:29.996398   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:29.996464   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:29.996376   54867 retry.go:31] will retry after 742.214804ms: waiting for domain to come up
	I0908 17:46:30.740021   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:30.740569   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:30.740602   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:30.740557   54867 retry.go:31] will retry after 1.104415458s: waiting for domain to come up
	W0908 17:46:27.439979   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:29.440873   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	I0908 17:46:27.242275   52672 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd 38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5130b38a54e8 57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76 c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251 3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14 6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567 3a809949aa75c8b15560bcb94da3b52ee69714288ea99f94d520cd42109caa70 87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d 4230c39d270b7cfaea11cf53a694273e2366581fe2c9e43e766ef969ea14962a 4fc28498bd5596d1c10917ff34757e7634be86e000adead59a5fa200e1f4f71a aecdc8609890df363dc852143482e4e4ad10e31c2f12fb490bf8dc1783d5bb65: (20.738033201s)
	W0908 17:46:27.242355   52672 kubeadm.go:640] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd 38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5130b38a54e8 57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76 c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251 3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14 6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567 3a809949aa75c8b15560bcb94da3b52ee69714288ea99f94d520cd42109caa70 87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d 4230c39d270b7cfaea11cf53a694273e2366581fe2c9e43e766ef969ea14962a 4fc28498bd5596d1c10917ff34757e7634be86e000adead59a5fa200e1f4f71a aecdc8609890df363dc852143482e4e4ad10e31c2f12fb490bf8dc1783d5bb65: Process exited with status 1
	stdout:
	c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd
	38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5130b38a54e8
	57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76
	c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251
	3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14
	6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567
	3a809949aa75c8b15560bcb94da3b52ee69714288ea99f94d520cd42109caa70
	
	stderr:
	E0908 17:46:27.234846    3313 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d\": container with ID starting with 87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d not found: ID does not exist" containerID="87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d"
	time="2025-09-08T17:46:27Z" level=fatal msg="stopping the container \"87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d\": rpc error: code = NotFound desc = could not find container \"87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d\": container with ID starting with 87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d not found: ID does not exist"
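	[editor's note] The crictl stop above fails because one of the listed container IDs no longer exists, and the tool logs a warning ("port conflicts may arise") and continues rather than aborting. A small sketch of that stop-and-tolerate-NotFound handling around the crictl CLI; the error-string check and the placeholder container ID are simplifications, not minikube's actual logic.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// stopContainers runs `sudo /usr/bin/crictl stop --timeout=10 <ids...>` and treats
	// "not found" failures as non-fatal, mirroring the warning-and-continue behaviour above.
	func stopContainers(ids []string) error {
		args := append([]string{"/usr/bin/crictl", "stop", "--timeout=10"}, ids...)
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "not found") {
				fmt.Printf("warning: some containers were already gone: %v\n", err)
				return nil
			}
			return fmt.Errorf("stop: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		if err := stopContainers([]string{"<container-id>"}); err != nil { // hypothetical ID
			fmt.Println(err)
		}
	}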
	I0908 17:46:27.242421   52672 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0908 17:46:27.288132   52672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 17:46:27.305174   52672 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Sep  8 17:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5638 Sep  8 17:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Sep  8 17:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5586 Sep  8 17:43 /etc/kubernetes/scheduler.conf
	
	I0908 17:46:27.305243   52672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 17:46:27.318443   52672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 17:46:27.330222   52672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 17:46:27.330296   52672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 17:46:27.344364   52672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 17:46:27.356811   52672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 17:46:27.356888   52672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 17:46:27.369803   52672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 17:46:27.383147   52672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 17:46:27.383206   52672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
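	[editor's note] The grep/rm sequence above checks each existing kubeconfig on the node for the expected control-plane endpoint and removes the ones that do not reference it, so kubeadm regenerates them in the following phase. A small local sketch of that check; the file paths and endpoint are copied from the log, while the helper name is made up and the real code runs these steps over SSH.

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// removeStaleKubeconfigs deletes any of the given kubeconfig files that do not
	// mention the expected API server endpoint, mirroring the grep + rm -f sequence above.
	func removeStaleKubeconfigs(endpoint string, paths []string) error {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil {
				return err
			}
			if !bytes.Contains(data, []byte(endpoint)) {
				fmt.Printf("%q not found in %s - removing\n", endpoint, p)
				if err := os.Remove(p); err != nil {
					return err
				}
			}
		}
		return nil
	}

	func main() {
		paths := []string{
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		if err := removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", paths); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}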
	I0908 17:46:27.397838   52672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 17:46:27.411850   52672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:46:27.475473   52672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:46:28.978315   52672 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.50279868s)
	I0908 17:46:28.978353   52672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:46:29.295854   52672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:46:29.376398   52672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:46:29.482101   52672 api_server.go:52] waiting for apiserver process to appear ...
	I0908 17:46:29.482182   52672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:46:29.982907   52672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:46:30.483250   52672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:46:30.519087   52672 api_server.go:72] duration metric: took 1.036983742s to wait for apiserver process to appear ...
	I0908 17:46:30.519172   52672 api_server.go:88] waiting for apiserver healthz status ...
	I0908 17:46:30.519202   52672 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	W0908 17:46:28.708888   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	W0908 17:46:30.710813   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	W0908 17:46:33.209675   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	I0908 17:46:32.419724   52672 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 17:46:32.419759   52672 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 17:46:32.419781   52672 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0908 17:46:32.450628   52672 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 17:46:32.450690   52672 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 17:46:32.520000   52672 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0908 17:46:32.534842   52672 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 17:46:32.534876   52672 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 17:46:33.019342   52672 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0908 17:46:33.025721   52672 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 17:46:33.025752   52672 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 17:46:33.519338   52672 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0908 17:46:33.528332   52672 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 17:46:33.528375   52672 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 17:46:34.019298   52672 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0908 17:46:34.029770   52672 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0908 17:46:34.042524   52672 api_server.go:141] control plane version: v1.34.0
	I0908 17:46:34.042571   52672 api_server.go:131] duration metric: took 3.523386145s to wait for apiserver health ...
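	[editor's note] The healthz checks above poll the API server's /healthz endpoint roughly every 500ms, treating 403 (anonymous access rejected) and 500 (post-start hooks still pending) as "not ready yet" and stopping at the first 200 "ok". A minimal polling sketch against an HTTPS endpoint follows; certificate verification is skipped only to keep the sketch self-contained, whereas a real client would present the cluster client certificates, as the kapi client config later in this log does.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 "ok" or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// InsecureSkipVerify keeps the sketch self-contained; real callers verify the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
				// 403 or 500 here mean the server is up but not fully initialised yet.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.196:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}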
	I0908 17:46:34.042583   52672 cni.go:84] Creating CNI manager for ""
	I0908 17:46:34.042593   52672 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 17:46:34.044031   52672 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 17:46:34.045182   52672 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 17:46:34.068261   52672 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
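	[editor's note] Having detected the kvm2 driver with the crio runtime, the tool writes a bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist. The 496-byte file itself is not shown in the log; the sketch below writes an illustrative minimal bridge conflist (the JSON content, subnet, and plugin list are assumptions, not the file minikube ships).

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Illustrative bridge conflist only; the actual 1-k8s.conflist content differs.
		conflist := `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}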
	I0908 17:46:34.100708   52672 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 17:46:34.113806   52672 system_pods.go:59] 6 kube-system pods found
	I0908 17:46:34.113857   52672 system_pods.go:61] "coredns-66bc5c9577-c2tlk" [6737e7ba-9abe-4fb0-92b9-28b32bb89ce8] Running
	I0908 17:46:34.113868   52672 system_pods.go:61] "etcd-pause-582402" [0079a3c1-312e-4117-8543-ba500931a7ba] Running
	I0908 17:46:34.113883   52672 system_pods.go:61] "kube-apiserver-pause-582402" [2258819d-bf5c-41e4-9357-2bc8d7ee5bc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 17:46:34.113894   52672 system_pods.go:61] "kube-controller-manager-pause-582402" [86bc4537-33f8-47ff-9941-c6fee4f6560f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 17:46:34.113910   52672 system_pods.go:61] "kube-proxy-9ld9z" [ff614778-696c-4113-9693-970eea6f5d45] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 17:46:34.113921   52672 system_pods.go:61] "kube-scheduler-pause-582402" [30c4c4e9-a2c8-4489-8d3f-b804341119d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 17:46:34.113934   52672 system_pods.go:74] duration metric: took 13.198432ms to wait for pod list to return data ...
	I0908 17:46:34.113949   52672 node_conditions.go:102] verifying NodePressure condition ...
	I0908 17:46:34.121054   52672 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 17:46:34.121079   52672 node_conditions.go:123] node cpu capacity is 2
	I0908 17:46:34.121088   52672 node_conditions.go:105] duration metric: took 7.135229ms to run NodePressure ...
	I0908 17:46:34.121107   52672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:46:34.393982   52672 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0908 17:46:34.397790   52672 kubeadm.go:735] kubelet initialised
	I0908 17:46:34.397823   52672 kubeadm.go:736] duration metric: took 3.815267ms waiting for restarted kubelet to initialise ...
	I0908 17:46:34.397842   52672 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 17:46:34.413894   52672 ops.go:34] apiserver oom_adj: -16
	I0908 17:46:34.413917   52672 kubeadm.go:593] duration metric: took 28.078864169s to restartPrimaryControlPlane
	I0908 17:46:34.413931   52672 kubeadm.go:394] duration metric: took 28.417572615s to StartCluster
	I0908 17:46:34.413954   52672 settings.go:142] acquiring lock: {Name:mk1c22e0fe8486f74cbd8991c9b3bb6f4c36c978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 17:46:34.414039   52672 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 17:46:34.415193   52672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/kubeconfig: {Name:mkb59774845ad4e65ea2ac11e21880c504ffe601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 17:46:34.415499   52672 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 17:46:34.415586   52672 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 17:46:34.415791   52672 config.go:182] Loaded profile config "pause-582402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:46:34.417300   52672 out.go:179] * Enabled addons: 
	I0908 17:46:34.417311   52672 out.go:179] * Verifying Kubernetes components...
	I0908 17:46:31.846133   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:31.846626   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:31.846772   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:31.846701   54867 retry.go:31] will retry after 1.135481372s: waiting for domain to come up
	I0908 17:46:32.984196   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:32.984662   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:32.984691   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:32.984648   54867 retry.go:31] will retry after 1.28455646s: waiting for domain to come up
	I0908 17:46:34.271149   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:34.271667   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:34.271698   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:34.271640   54867 retry.go:31] will retry after 1.636931145s: waiting for domain to come up
	W0908 17:46:31.442530   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:33.940787   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	I0908 17:46:34.418353   52672 addons.go:514] duration metric: took 2.779085ms for enable addons: enabled=[]
	I0908 17:46:34.418377   52672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 17:46:34.680180   52672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 17:46:34.707862   52672 node_ready.go:35] waiting up to 6m0s for node "pause-582402" to be "Ready" ...
	I0908 17:46:34.714406   52672 node_ready.go:49] node "pause-582402" is "Ready"
	I0908 17:46:34.714439   52672 node_ready.go:38] duration metric: took 6.540889ms for node "pause-582402" to be "Ready" ...
	I0908 17:46:34.714455   52672 api_server.go:52] waiting for apiserver process to appear ...
	I0908 17:46:34.714513   52672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:46:34.741180   52672 api_server.go:72] duration metric: took 325.637518ms to wait for apiserver process to appear ...
	I0908 17:46:34.741214   52672 api_server.go:88] waiting for apiserver healthz status ...
	I0908 17:46:34.741268   52672 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0908 17:46:34.748497   52672 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0908 17:46:34.750002   52672 api_server.go:141] control plane version: v1.34.0
	I0908 17:46:34.750024   52672 api_server.go:131] duration metric: took 8.801955ms to wait for apiserver health ...
	I0908 17:46:34.750036   52672 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 17:46:34.753815   52672 system_pods.go:59] 6 kube-system pods found
	I0908 17:46:34.753847   52672 system_pods.go:61] "coredns-66bc5c9577-c2tlk" [6737e7ba-9abe-4fb0-92b9-28b32bb89ce8] Running
	I0908 17:46:34.753856   52672 system_pods.go:61] "etcd-pause-582402" [0079a3c1-312e-4117-8543-ba500931a7ba] Running
	I0908 17:46:34.753867   52672 system_pods.go:61] "kube-apiserver-pause-582402" [2258819d-bf5c-41e4-9357-2bc8d7ee5bc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 17:46:34.753877   52672 system_pods.go:61] "kube-controller-manager-pause-582402" [86bc4537-33f8-47ff-9941-c6fee4f6560f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 17:46:34.753888   52672 system_pods.go:61] "kube-proxy-9ld9z" [ff614778-696c-4113-9693-970eea6f5d45] Running
	I0908 17:46:34.753896   52672 system_pods.go:61] "kube-scheduler-pause-582402" [30c4c4e9-a2c8-4489-8d3f-b804341119d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 17:46:34.753905   52672 system_pods.go:74] duration metric: took 3.862953ms to wait for pod list to return data ...
	I0908 17:46:34.753917   52672 default_sa.go:34] waiting for default service account to be created ...
	I0908 17:46:34.756474   52672 default_sa.go:45] found service account: "default"
	I0908 17:46:34.756495   52672 default_sa.go:55] duration metric: took 2.571785ms for default service account to be created ...
	I0908 17:46:34.756505   52672 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 17:46:34.764959   52672 system_pods.go:86] 6 kube-system pods found
	I0908 17:46:34.764992   52672 system_pods.go:89] "coredns-66bc5c9577-c2tlk" [6737e7ba-9abe-4fb0-92b9-28b32bb89ce8] Running
	I0908 17:46:34.765000   52672 system_pods.go:89] "etcd-pause-582402" [0079a3c1-312e-4117-8543-ba500931a7ba] Running
	I0908 17:46:34.765011   52672 system_pods.go:89] "kube-apiserver-pause-582402" [2258819d-bf5c-41e4-9357-2bc8d7ee5bc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 17:46:34.765024   52672 system_pods.go:89] "kube-controller-manager-pause-582402" [86bc4537-33f8-47ff-9941-c6fee4f6560f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 17:46:34.765032   52672 system_pods.go:89] "kube-proxy-9ld9z" [ff614778-696c-4113-9693-970eea6f5d45] Running
	I0908 17:46:34.765040   52672 system_pods.go:89] "kube-scheduler-pause-582402" [30c4c4e9-a2c8-4489-8d3f-b804341119d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 17:46:34.765051   52672 system_pods.go:126] duration metric: took 8.537852ms to wait for k8s-apps to be running ...
	I0908 17:46:34.765064   52672 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 17:46:34.765116   52672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 17:46:34.786190   52672 system_svc.go:56] duration metric: took 21.115435ms WaitForService to wait for kubelet
	I0908 17:46:34.786234   52672 kubeadm.go:578] duration metric: took 370.704648ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 17:46:34.786254   52672 node_conditions.go:102] verifying NodePressure condition ...
	I0908 17:46:34.789187   52672 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 17:46:34.789211   52672 node_conditions.go:123] node cpu capacity is 2
	I0908 17:46:34.789225   52672 node_conditions.go:105] duration metric: took 2.965946ms to run NodePressure ...
	I0908 17:46:34.789238   52672 start.go:241] waiting for startup goroutines ...
	I0908 17:46:34.789246   52672 start.go:246] waiting for cluster config update ...
	I0908 17:46:34.789254   52672 start.go:255] writing updated cluster config ...
	I0908 17:46:34.789546   52672 ssh_runner.go:195] Run: rm -f paused
	I0908 17:46:34.796754   52672 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 17:46:34.797847   52672 kapi.go:59] client config for pause-582402: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/profiles/pause-582402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/profiles/pause-582402/client.key", CAFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 17:46:34.801843   52672 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c2tlk" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:34.807696   52672 pod_ready.go:94] pod "coredns-66bc5c9577-c2tlk" is "Ready"
	I0908 17:46:34.807727   52672 pod_ready.go:86] duration metric: took 5.848212ms for pod "coredns-66bc5c9577-c2tlk" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:34.810773   52672 pod_ready.go:83] waiting for pod "etcd-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:34.816627   52672 pod_ready.go:94] pod "etcd-pause-582402" is "Ready"
	I0908 17:46:34.816652   52672 pod_ready.go:86] duration metric: took 5.855848ms for pod "etcd-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:34.819999   52672 pod_ready.go:83] waiting for pod "kube-apiserver-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 17:46:35.210486   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	W0908 17:46:37.211215   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	I0908 17:46:35.910296   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:35.910955   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:35.910995   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:35.910935   54867 retry.go:31] will retry after 2.234938879s: waiting for domain to come up
	I0908 17:46:38.148158   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:38.148809   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:38.148858   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:38.148785   54867 retry.go:31] will retry after 2.887047844s: waiting for domain to come up
	W0908 17:46:35.941846   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:37.942331   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:40.439762   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:36.830288   52672 pod_ready.go:104] pod "kube-apiserver-pause-582402" is not "Ready", error: <nil>
	W0908 17:46:39.326394   52672 pod_ready.go:104] pod "kube-apiserver-pause-582402" is not "Ready", error: <nil>
	W0908 17:46:41.326905   52672 pod_ready.go:104] pod "kube-apiserver-pause-582402" is not "Ready", error: <nil>
	W0908 17:46:39.709672   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	W0908 17:46:42.209034   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	I0908 17:46:43.831976   52672 pod_ready.go:94] pod "kube-apiserver-pause-582402" is "Ready"
	I0908 17:46:43.832002   52672 pod_ready.go:86] duration metric: took 9.011976618s for pod "kube-apiserver-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:43.834242   52672 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:43.839567   52672 pod_ready.go:94] pod "kube-controller-manager-pause-582402" is "Ready"
	I0908 17:46:43.839593   52672 pod_ready.go:86] duration metric: took 5.326245ms for pod "kube-controller-manager-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:43.841513   52672 pod_ready.go:83] waiting for pod "kube-proxy-9ld9z" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:43.845460   52672 pod_ready.go:94] pod "kube-proxy-9ld9z" is "Ready"
	I0908 17:46:43.845488   52672 pod_ready.go:86] duration metric: took 3.956239ms for pod "kube-proxy-9ld9z" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:43.847410   52672 pod_ready.go:83] waiting for pod "kube-scheduler-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:44.424735   52672 pod_ready.go:94] pod "kube-scheduler-pause-582402" is "Ready"
	I0908 17:46:44.424762   52672 pod_ready.go:86] duration metric: took 577.331373ms for pod "kube-scheduler-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:44.424772   52672 pod_ready.go:40] duration metric: took 9.62797322s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 17:46:44.474082   52672 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 17:46:44.475728   52672 out.go:179] * Done! kubectl is now configured to use "pause-582402" cluster and "default" namespace by default
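	[editor's note] The pod_ready wait that finishes above repeatedly lists kube-system pods by label and checks their Ready condition before declaring the restarted cluster done. A sketch of that readiness check with client-go follows; the kubeconfig path is the one from this log, the label selector is an example, and minikube's pod_ready.go does more (it also tolerates pods that disappear).

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21504-7629/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll until every matching kube-system pod is Ready (selector is an example).
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "component=kube-apiserver"})
			if err == nil {
				ready := len(pods.Items) > 0
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						ready = false
					}
				}
				if ready {
					fmt.Println("all matching pods are Ready")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
	}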
	
	
	==> CRI-O <==
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.182895515Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757353605182870462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8622e13b-dfe7-42ac-97f6-b385c5ca353a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.183625278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1edcbd6-903c-4786-afba-42f4893f3529 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.183690881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1edcbd6-903c-4786-afba-42f4893f3529 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.183942202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16e436b9bd7b21a9b36bd130cd8ea344cda06a19dda917a779a8163811c5366e,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757353593718394484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccda1b726e4d850914c07874d01efb6cd70d3958c4ff375fe9129434c7a904bf,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757353589963125191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194e1ca66696d558ac6b8630dab91ff0537adf6ad4e3c214089e239178fdf36b,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757353589925068783,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernet
es.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bf55517747fb3d1d169b67e626f80cf6961f1bd4cf8064c4e0f3caa6b4d55a,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757353589907027314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee46f499922fdb4631f75c5ceaed4cf569db29c8ef0ecb8253beb03d8eee294d,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757353576688092036,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327b7b6576be426c43e05ceebed39315dfe9c5369defa56c1aa708bf42535ac1,PodSandboxId:e48594ec30229fded335944f0cbf97b418ab1d38357d08776c4693d53741b34e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175735
3566748815783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f
90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757353565574491656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5
130b38a54e8,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757353565537943192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757353565514013473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757353565326963783,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757353565356091406,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567,PodSandboxId:88e98ad5b6363624d749ab496855580275c3720f64e800917730279dd62d6e51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757353449716393553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1edcbd6-903c-4786-afba-42f4893f3529 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.234451850Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad699d40-933d-4efb-bb98-1c57f9062bb0 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.234546758Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad699d40-933d-4efb-bb98-1c57f9062bb0 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.235924609Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30dc70a3-8bd9-42cc-8440-562e3eaf4edc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.236334664Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757353605236315903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30dc70a3-8bd9-42cc-8440-562e3eaf4edc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.236946221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=beffff9f-fc86-413d-8fdf-31789212d637 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.237383808Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=beffff9f-fc86-413d-8fdf-31789212d637 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.238091365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16e436b9bd7b21a9b36bd130cd8ea344cda06a19dda917a779a8163811c5366e,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757353593718394484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccda1b726e4d850914c07874d01efb6cd70d3958c4ff375fe9129434c7a904bf,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757353589963125191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194e1ca66696d558ac6b8630dab91ff0537adf6ad4e3c214089e239178fdf36b,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757353589925068783,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernet
es.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bf55517747fb3d1d169b67e626f80cf6961f1bd4cf8064c4e0f3caa6b4d55a,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757353589907027314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee46f499922fdb4631f75c5ceaed4cf569db29c8ef0ecb8253beb03d8eee294d,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757353576688092036,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327b7b6576be426c43e05ceebed39315dfe9c5369defa56c1aa708bf42535ac1,PodSandboxId:e48594ec30229fded335944f0cbf97b418ab1d38357d08776c4693d53741b34e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175735
3566748815783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f
90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757353565574491656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5
130b38a54e8,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757353565537943192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757353565514013473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757353565326963783,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757353565356091406,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567,PodSandboxId:88e98ad5b6363624d749ab496855580275c3720f64e800917730279dd62d6e51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757353449716393553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=beffff9f-fc86-413d-8fdf-31789212d637 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.290183342Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0787ae6c-4205-4f71-9eb6-b0bfbc9b45cb name=/runtime.v1.RuntimeService/Version
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.290293845Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0787ae6c-4205-4f71-9eb6-b0bfbc9b45cb name=/runtime.v1.RuntimeService/Version
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.291667282Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ffbcac2-fc53-4999-84f0-bc73331ef9a9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.292053347Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757353605292033797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ffbcac2-fc53-4999-84f0-bc73331ef9a9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.293016263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22d509dd-aa23-4c52-b024-504e25d9ade8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.293138467Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22d509dd-aa23-4c52-b024-504e25d9ade8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.293529438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16e436b9bd7b21a9b36bd130cd8ea344cda06a19dda917a779a8163811c5366e,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757353593718394484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccda1b726e4d850914c07874d01efb6cd70d3958c4ff375fe9129434c7a904bf,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757353589963125191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194e1ca66696d558ac6b8630dab91ff0537adf6ad4e3c214089e239178fdf36b,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757353589925068783,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernet
es.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bf55517747fb3d1d169b67e626f80cf6961f1bd4cf8064c4e0f3caa6b4d55a,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757353589907027314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee46f499922fdb4631f75c5ceaed4cf569db29c8ef0ecb8253beb03d8eee294d,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757353576688092036,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327b7b6576be426c43e05ceebed39315dfe9c5369defa56c1aa708bf42535ac1,PodSandboxId:e48594ec30229fded335944f0cbf97b418ab1d38357d08776c4693d53741b34e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175735
3566748815783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f
90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757353565574491656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5
130b38a54e8,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757353565537943192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757353565514013473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757353565326963783,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757353565356091406,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567,PodSandboxId:88e98ad5b6363624d749ab496855580275c3720f64e800917730279dd62d6e51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757353449716393553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22d509dd-aa23-4c52-b024-504e25d9ade8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.339901099Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b2b6841-6add-44d6-8a66-170663aad336 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.339989492Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b2b6841-6add-44d6-8a66-170663aad336 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.342051776Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d70778f-f11c-4d29-a6de-a8129ddd8fa4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.342537963Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757353605342504135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d70778f-f11c-4d29-a6de-a8129ddd8fa4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.343257368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5249ccc5-9d4c-4fc3-816b-099a1767736f name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.343508749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5249ccc5-9d4c-4fc3-816b-099a1767736f name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:45 pause-582402 crio[2548]: time="2025-09-08 17:46:45.344889282Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16e436b9bd7b21a9b36bd130cd8ea344cda06a19dda917a779a8163811c5366e,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757353593718394484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccda1b726e4d850914c07874d01efb6cd70d3958c4ff375fe9129434c7a904bf,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757353589963125191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194e1ca66696d558ac6b8630dab91ff0537adf6ad4e3c214089e239178fdf36b,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757353589925068783,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernet
es.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bf55517747fb3d1d169b67e626f80cf6961f1bd4cf8064c4e0f3caa6b4d55a,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757353589907027314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee46f499922fdb4631f75c5ceaed4cf569db29c8ef0ecb8253beb03d8eee294d,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757353576688092036,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327b7b6576be426c43e05ceebed39315dfe9c5369defa56c1aa708bf42535ac1,PodSandboxId:e48594ec30229fded335944f0cbf97b418ab1d38357d08776c4693d53741b34e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175735
3566748815783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f
90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757353565574491656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5
130b38a54e8,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757353565537943192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757353565514013473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757353565326963783,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757353565356091406,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567,PodSandboxId:88e98ad5b6363624d749ab496855580275c3720f64e800917730279dd62d6e51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757353449716393553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5249ccc5-9d4c-4fc3-816b-099a1767736f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	16e436b9bd7b2       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   11 seconds ago      Running             kube-proxy                2                   f04aac971a45b       kube-proxy-9ld9z
	ccda1b726e4d8       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   15 seconds ago      Running             kube-apiserver            2                   de2d9fa8364e7       kube-apiserver-pause-582402
	194e1ca66696d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   15 seconds ago      Running             kube-controller-manager   2                   ed08660f37683       kube-controller-manager-pause-582402
	f7bf55517747f       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   15 seconds ago      Running             kube-scheduler            2                   9e78c9867194f       kube-scheduler-pause-582402
	ee46f499922fd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   28 seconds ago      Running             etcd                      2                   20fb8bd0d62b9       etcd-pause-582402
	327b7b6576be4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   38 seconds ago      Running             coredns                   1                   e48594ec30229       coredns-66bc5c9577-c2tlk
	c2b9ba65561f5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   39 seconds ago      Exited              etcd                      1                   20fb8bd0d62b9       etcd-pause-582402
	38b1c91b613d1       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   39 seconds ago      Exited              kube-scheduler            1                   9e78c9867194f       kube-scheduler-pause-582402
	57c683e60f5c3       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   39 seconds ago      Exited              kube-apiserver            1                   de2d9fa8364e7       kube-apiserver-pause-582402
	c589b89a7aa0d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   40 seconds ago      Exited              kube-controller-manager   1                   ed08660f37683       kube-controller-manager-pause-582402
	3e24256763b07       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   40 seconds ago      Exited              kube-proxy                1                   f04aac971a45b       kube-proxy-9ld9z
	6d1f0f35bb8d6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   2 minutes ago       Exited              coredns                   0                   88e98ad5b6363       coredns-66bc5c9577-c2tlk
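The table above is the CRI view of the node, including the exited first attempts of each control-plane container. A minimal sketch of reproducing it by hand, assuming crictl inside the guest is pointed at the cri-o socket (as it is in the minikube ISO) and using one of the container IDs from the listing above (crictl accepts ID prefixes):

    # list every container, including exited ones, straight from cri-o
    out/minikube-linux-amd64 -p pause-582402 ssh "sudo crictl ps -a"
    # pull the logs of the first (exited) kube-apiserver attempt for comparison
    out/minikube-linux-amd64 -p pause-582402 ssh "sudo crictl logs 57c683e60f5c3"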
	
	
	==> coredns [327b7b6576be426c43e05ceebed39315dfe9c5369defa56c1aa708bf42535ac1] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56460 - 6091 "HINFO IN 8912549566450987198.2962805177611349679. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033537603s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39250 - 2984 "HINFO IN 8293307131784017457.3055633146308688108. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036101816s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
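The exited CoreDNS instance above timed out dialing the in-cluster API endpoint (10.96.0.1:443) before receiving SIGTERM during the restart. A minimal sketch of checking that the replacement instance can serve once the cluster settles, assuming the cluster can pull a busybox image and that kube-proxy has programmed its iptables rules:

    # end-to-end DNS probe through the cluster service IP (exercises CoreDNS -> apiserver)
    kubectl --context pause-582402 run dns-probe --image=busybox:1.36 --restart=Never --rm -it -- \
      nslookup kubernetes.default.svc.cluster.local
    # confirm kube-proxy installed NAT rules for the 10.96.0.1 service IP on the node
    out/minikube-linux-amd64 -p pause-582402 ssh "sudo iptables -t nat -S | grep 10.96.0.1"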
	
	
	==> describe nodes <==
	Name:               pause-582402
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-582402
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6
	                    minikube.k8s.io/name=pause-582402
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T17_44_04_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 17:44:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-582402
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 17:46:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 17:46:32 +0000   Mon, 08 Sep 2025 17:43:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 17:46:32 +0000   Mon, 08 Sep 2025 17:43:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 17:46:32 +0000   Mon, 08 Sep 2025 17:43:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 17:46:32 +0000   Mon, 08 Sep 2025 17:44:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    pause-582402
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 56da0bd8121b4274a96a44431f816186
	  System UUID:                56da0bd8-121b-4274-a96a-44431f816186
	  Boot ID:                    feff45fc-33e8-489e-a216-bac44daf0199
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-c2tlk                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m37s
	  kube-system                 etcd-pause-582402                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m42s
	  kube-system                 kube-apiserver-pause-582402             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-controller-manager-pause-582402    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-proxy-9ld9z                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-scheduler-pause-582402             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m35s                  kube-proxy       
	  Normal  Starting                 11s                    kube-proxy       
	  Normal  Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m48s (x8 over 2m48s)  kubelet          Node pause-582402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m48s (x8 over 2m48s)  kubelet          Node pause-582402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m48s (x7 over 2m48s)  kubelet          Node pause-582402 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m42s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m41s                  kubelet          Node pause-582402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m41s                  kubelet          Node pause-582402 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m41s                  kubelet          Node pause-582402 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m41s                  kubelet          Node pause-582402 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           2m38s                  node-controller  Node pause-582402 event: Registered Node pause-582402 in Controller
	  Normal  Starting                 16s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16s (x8 over 16s)      kubelet          Node pause-582402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16s (x8 over 16s)      kubelet          Node pause-582402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16s (x7 over 16s)      kubelet          Node pause-582402 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                    node-controller  Node pause-582402 event: Registered Node pause-582402 in Controller
	
	
	==> dmesg <==
	[Sep 8 17:43] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000945] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004676] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.212135] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.096841] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.097653] kauditd_printk_skb: 74 callbacks suppressed
	[Sep 8 17:44] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.883355] kauditd_printk_skb: 19 callbacks suppressed
	[ +35.401097] kauditd_printk_skb: 183 callbacks suppressed
	[Sep 8 17:46] kauditd_printk_skb: 34 callbacks suppressed
	[ +11.085947] kauditd_printk_skb: 254 callbacks suppressed
	[  +0.138335] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.031852] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd] <==
	{"level":"warn","ts":"2025-09-08T17:46:06.385947Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"warn","ts":"2025-09-08T17:46:06.388810Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	{"level":"info","ts":"2025-09-08T17:46:06.390635Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.196:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.196:2380","--initial-cluster=pause-582402=https://192.168.39.196:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.196:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.196:2380","--name=pause-582402","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s
"]}
	{"level":"info","ts":"2025-09-08T17:46:06.390801Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2025-09-08T17:46:06.390832Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-09-08T17:46:06.390856Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.196:2380"]}
	{"level":"info","ts":"2025-09-08T17:46:06.390885Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-08T17:46:06.391498Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.196:2379"]}
	{"level":"info","ts":"2025-09-08T17:46:06.405643Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.4","git-sha":"5400cdc","go-version":"go1.23.11","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-582402","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.196:2380"],"listen-peer-urls":["https://192.168.39.196:2380"],"advertise-client-urls":["https://192.168.39.196:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.196:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-clu
ster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	{"level":"info","ts":"2025-09-08T17:46:06.406352Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00011e7c8}"}
	{"level":"info","ts":"2025-09-08T17:46:06.458996Z","logger":"bbolt","caller":"bbolt@v1.4.2/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	{"level":"info","ts":"2025-09-08T17:46:06.463070Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"56.805996ms"}
	{"level":"info","ts":"2025-09-08T17:46:06.463162Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":487}
	{"level":"info","ts":"2025-09-08T17:46:06.508895Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-09-08T17:46:06.515128Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":901120,"backend-size":"901 kB","backend-size-in-use-bytes":884736,"backend-size-in-use":"885 kB"}
	{"level":"info","ts":"2025-09-08T17:46:06.518410Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	{"level":"info","ts":"2025-09-08T17:46:06.549170Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","commit-index":487}
	{"level":"info","ts":"2025-09-08T17:46:06.561671Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	{"level":"info","ts":"2025-09-08T17:46:06.564066Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	{"level":"info","ts":"2025-09-08T17:46:06.568313Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:a14f9258d3b66c75 RaftAttributes:{PeerURLs:[https://192.168.39.196:2380] IsLearner:false} Attributes:{Name:pause-582402 ClientURLs:[https://192.168.39.196:2379]}}"}
	
	
	==> etcd [ee46f499922fdb4631f75c5ceaed4cf569db29c8ef0ecb8253beb03d8eee294d] <==
	{"level":"warn","ts":"2025-09-08T17:46:31.312843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.340450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.350318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.363070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.377221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.404790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.430523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.439624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.453277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.474693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.476391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.491045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.502416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.525622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.536306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.552786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.561529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.576453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.585133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.619942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.648637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.659715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.668541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.683730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.744182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:46:45 up 3 min,  0 users,  load average: 0.59, 0.36, 0.15
	Linux pause-582402 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76] <==
	{"level":"warn","ts":"2025-09-08T17:46:26.056422Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":78,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.080143Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":79,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.106685Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":80,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.129527Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":81,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.154023Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":82,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.180291Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":83,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.207063Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":84,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.232184Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":85,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.257770Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":86,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.282733Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":87,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.307312Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":88,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.331377Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":89,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.354811Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":90,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.378089Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":91,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.402147Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":92,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.428433Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":93,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.456121Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":94,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.479804Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":95,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.507164Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":96,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.533733Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":97,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.559711Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":98,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.585422Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":99,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	E0908 17:46:26.585508       1 controller.go:97] Error removing old endpoints from kubernetes service: rpc error: code = Canceled desc = grpc: the client connection is closing
	E0908 17:46:26.680400       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0908 17:46:26.681032       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-apiserver [ccda1b726e4d850914c07874d01efb6cd70d3958c4ff375fe9129434c7a904bf] <==
	I0908 17:46:32.555687       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0908 17:46:32.556178       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0908 17:46:32.563398       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0908 17:46:32.563805       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0908 17:46:32.563966       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0908 17:46:32.563997       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0908 17:46:32.564013       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0908 17:46:32.564163       1 aggregator.go:171] initial CRD sync complete...
	I0908 17:46:32.564174       1 autoregister_controller.go:144] Starting autoregister controller
	I0908 17:46:32.564182       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0908 17:46:32.564189       1 cache.go:39] Caches are synced for autoregister controller
	I0908 17:46:32.564422       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0908 17:46:32.564464       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0908 17:46:32.574775       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0908 17:46:32.574859       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I0908 17:46:32.574949       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0908 17:46:33.362295       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0908 17:46:33.456300       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W0908 17:46:33.794207       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.196]
	I0908 17:46:33.797274       1 controller.go:667] quota admission added evaluator for: endpoints
	I0908 17:46:33.815135       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 17:46:34.251601       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0908 17:46:34.320344       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0908 17:46:34.356317       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0908 17:46:34.366434       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
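This is the apiserver that came up on the second attempt; the earlier instance above was still refusing connections on 127.0.0.1:8443 while shutting down. A quick way to confirm which instance is serving is a raw readiness probe through kubectl (standard kubectl behavior, no extra tooling assumed):

    # readiness of the currently serving apiserver, with per-check detail
    kubectl --context pause-582402 get --raw='/readyz?verbose'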
	
	
	==> kube-controller-manager [194e1ca66696d558ac6b8630dab91ff0537adf6ad4e3c214089e239178fdf36b] <==
	I0908 17:46:35.875004       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 17:46:35.880057       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 17:46:35.882327       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 17:46:35.886667       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 17:46:35.889115       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0908 17:46:35.889316       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 17:46:35.889422       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-582402"
	I0908 17:46:35.889462       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 17:46:35.891647       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 17:46:35.891997       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0908 17:46:35.893317       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 17:46:35.893412       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 17:46:35.893422       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 17:46:35.893430       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 17:46:35.893731       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 17:46:35.894032       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 17:46:35.894160       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0908 17:46:35.894169       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 17:46:35.894180       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 17:46:35.908451       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 17:46:35.908496       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 17:46:35.908508       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 17:46:35.916788       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 17:46:35.916804       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 17:46:35.926679       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251] <==
	I0908 17:46:07.393897       1 serving.go:386] Generated self-signed cert in-memory
	I0908 17:46:08.076342       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0908 17:46:08.076398       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 17:46:08.079765       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0908 17:46:08.079909       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0908 17:46:08.080077       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0908 17:46:08.080695       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [16e436b9bd7b21a9b36bd130cd8ea344cda06a19dda917a779a8163811c5366e] <==
	I0908 17:46:33.920187       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 17:46:34.021719       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 17:46:34.021766       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.196"]
	E0908 17:46:34.021834       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 17:46:34.099077       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 17:46:34.099146       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 17:46:34.099169       1 server_linux.go:132] "Using iptables Proxier"
	I0908 17:46:34.119772       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 17:46:34.120134       1 server.go:527] "Version info" version="v1.34.0"
	I0908 17:46:34.121052       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 17:46:34.125984       1 config.go:200] "Starting service config controller"
	I0908 17:46:34.126045       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 17:46:34.126075       1 config.go:106] "Starting endpoint slice config controller"
	I0908 17:46:34.126090       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 17:46:34.126111       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 17:46:34.126125       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 17:46:34.126779       1 config.go:309] "Starting node config controller"
	I0908 17:46:34.128941       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 17:46:34.129234       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 17:46:34.226894       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 17:46:34.227004       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 17:46:34.227430       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
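The "nodePortAddresses is unset" message above is a configuration hint from kube-proxy, not a fault. A sketch of acting on it, under the assumption that this kubeadm-style cluster keeps its proxy settings in the usual kube-proxy ConfigMap (which minikube does):

    # locate the nodePortAddresses field in the KubeProxyConfiguration
    kubectl --context pause-582402 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
    # after editing the ConfigMap, restart the daemonset so the running pods pick up the change
    kubectl --context pause-582402 -n kube-system rollout restart daemonset kube-proxy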
	
	
	==> kube-proxy [3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14] <==
	I0908 17:46:24.081418       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 17:46:24.081508       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 17:46:24.081551       1 server_linux.go:132] "Using iptables Proxier"
	I0908 17:46:24.096420       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 17:46:24.097241       1 server.go:527] "Version info" version="v1.34.0"
	I0908 17:46:24.097294       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 17:46:24.105153       1 config.go:200] "Starting service config controller"
	I0908 17:46:24.106658       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 17:46:24.105553       1 config.go:106] "Starting endpoint slice config controller"
	I0908 17:46:24.106707       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 17:46:24.105623       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 17:46:24.106717       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	E0908 17:46:24.106272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.196:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E0908 17:46:24.106335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.196:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E0908 17:46:24.106386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.196:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0908 17:46:24.106449       1 config.go:309] "Starting node config controller"
	I0908 17:46:24.106735       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 17:46:24.106739       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E0908 17:46:24.107082       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.196:8443: connect: connection refused"
	E0908 17:46:25.234522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.196:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E0908 17:46:25.268730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.196:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 17:46:25.405050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.196:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	
	
	==> kube-scheduler [38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5130b38a54e8] <==
	I0908 17:46:07.843863       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [f7bf55517747fb3d1d169b67e626f80cf6961f1bd4cf8064c4e0f3caa6b4d55a] <==
	I0908 17:46:31.667773       1 serving.go:386] Generated self-signed cert in-memory
	W0908 17:46:32.450033       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 17:46:32.452709       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 17:46:32.452780       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 17:46:32.452801       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 17:46:32.503183       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 17:46:32.503222       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 17:46:32.512069       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 17:46:32.512417       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 17:46:32.514645       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 17:46:32.514778       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0908 17:46:32.523128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I0908 17:46:32.615032       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.613515    3664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: E0908 17:46:32.629342    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-582402\" already exists" pod="kube-system/kube-scheduler-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: E0908 17:46:32.629912    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-582402\" already exists" pod="kube-system/kube-apiserver-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.630363    3664 kubelet_node_status.go:124] "Node was previously registered" node="pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.630552    3664 kubelet_node_status.go:78] "Successfully registered node" node="pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.630670    3664 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.631987    3664 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: E0908 17:46:32.648051    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-582402\" already exists" pod="kube-system/etcd-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.648075    3664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: E0908 17:46:32.671285    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-582402\" already exists" pod="kube-system/kube-apiserver-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.671512    3664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: E0908 17:46:32.688118    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-582402\" already exists" pod="kube-system/kube-controller-manager-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.688253    3664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: E0908 17:46:32.702784    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-582402\" already exists" pod="kube-system/kube-scheduler-pause-582402"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: I0908 17:46:33.216069    3664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-582402"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: E0908 17:46:33.227121    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-582402\" already exists" pod="kube-system/etcd-pause-582402"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: I0908 17:46:33.400835    3664 apiserver.go:52] "Watching apiserver"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: I0908 17:46:33.425474    3664 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: I0908 17:46:33.449802    3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff614778-696c-4113-9693-970eea6f5d45-xtables-lock\") pod \"kube-proxy-9ld9z\" (UID: \"ff614778-696c-4113-9693-970eea6f5d45\") " pod="kube-system/kube-proxy-9ld9z"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: I0908 17:46:33.450170    3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff614778-696c-4113-9693-970eea6f5d45-lib-modules\") pod \"kube-proxy-9ld9z\" (UID: \"ff614778-696c-4113-9693-970eea6f5d45\") " pod="kube-system/kube-proxy-9ld9z"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: I0908 17:46:33.619082    3664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-582402"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: E0908 17:46:33.633186    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-582402\" already exists" pod="kube-system/etcd-pause-582402"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: I0908 17:46:33.705491    3664 scope.go:117] "RemoveContainer" containerID="3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14"
	Sep 08 17:46:39 pause-582402 kubelet[3664]: E0908 17:46:39.588753    3664 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757353599587969576  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 08 17:46:39 pause-582402 kubelet[3664]: E0908 17:46:39.588783    3664 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757353599587969576  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-582402 -n pause-582402
helpers_test.go:269: (dbg) Run:  kubectl --context pause-582402 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-582402 -n pause-582402
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-582402 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-582402 logs -n 25: (1.465529965s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-387181 sudo systemctl status kubelet --all --full --no-pager                                                                 │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo systemctl cat kubelet --no-pager                                                                                 │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo journalctl -xeu kubelet --all --full --no-pager                                                                  │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo cat /etc/kubernetes/kubelet.conf                                                                                 │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo cat /var/lib/kubelet/config.yaml                                                                                 │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo systemctl status docker --all --full --no-pager                                                                  │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │                     │
	│ ssh     │ -p auto-387181 sudo systemctl cat docker --no-pager                                                                                  │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo cat /etc/docker/daemon.json                                                                                      │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo docker system info                                                                                               │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │                     │
	│ ssh     │ -p auto-387181 sudo systemctl status cri-docker --all --full --no-pager                                                              │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │                     │
	│ ssh     │ -p auto-387181 sudo systemctl cat cri-docker --no-pager                                                                              │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                         │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │                     │
	│ ssh     │ -p auto-387181 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                   │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo cri-dockerd --version                                                                                            │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo systemctl status containerd --all --full --no-pager                                                              │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │                     │
	│ ssh     │ -p auto-387181 sudo systemctl cat containerd --no-pager                                                                              │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo cat /lib/systemd/system/containerd.service                                                                       │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo cat /etc/containerd/config.toml                                                                                  │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo containerd config dump                                                                                           │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo systemctl status crio --all --full --no-pager                                                                    │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo systemctl cat crio --no-pager                                                                                    │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                          │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ ssh     │ -p auto-387181 sudo crio config                                                                                                      │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ delete  │ -p auto-387181                                                                                                                       │ auto-387181   │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │ 08 Sep 25 17:46 UTC │
	│ start   │ -p bridge-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio │ bridge-387181 │ jenkins │ v1.36.0 │ 08 Sep 25 17:46 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 17:46:25
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 17:46:25.848328   54844 out.go:360] Setting OutFile to fd 1 ...
	I0908 17:46:25.848434   54844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:46:25.848440   54844 out.go:374] Setting ErrFile to fd 2...
	I0908 17:46:25.848447   54844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:46:25.848638   54844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
	I0908 17:46:25.849246   54844 out.go:368] Setting JSON to false
	I0908 17:46:25.850315   54844 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5329,"bootTime":1757348257,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 17:46:25.850400   54844 start.go:140] virtualization: kvm guest
	I0908 17:46:25.852421   54844 out.go:179] * [bridge-387181] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 17:46:25.853752   54844 notify.go:220] Checking for updates...
	I0908 17:46:25.853769   54844 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 17:46:25.855120   54844 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 17:46:25.856317   54844 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 17:46:25.857599   54844 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 17:46:25.858876   54844 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 17:46:25.860076   54844 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 17:46:25.861698   54844 config.go:182] Loaded profile config "enable-default-cni-387181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:46:25.861791   54844 config.go:182] Loaded profile config "flannel-387181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:46:25.861912   54844 config.go:182] Loaded profile config "pause-582402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:46:25.862014   54844 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 17:46:25.901591   54844 out.go:179] * Using the kvm2 driver based on user configuration
	W0908 17:46:20.940856   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:22.942786   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:25.439765   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	I0908 17:46:25.902778   54844 start.go:304] selected driver: kvm2
	I0908 17:46:25.902794   54844 start.go:918] validating driver "kvm2" against <nil>
	I0908 17:46:25.902819   54844 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 17:46:25.903511   54844 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 17:46:25.903589   54844 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21504-7629/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 17:46:25.920736   54844 install.go:137] /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 17:46:25.920798   54844 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 17:46:25.921083   54844 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 17:46:25.921118   54844 cni.go:84] Creating CNI manager for "bridge"
	I0908 17:46:25.921127   54844 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 17:46:25.921182   54844 start.go:348] cluster config:
	{Name:bridge-387181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-387181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 17:46:25.921297   54844 iso.go:125] acquiring lock: {Name:mkaf49872b434993209a65bf0f93ea3e4c6d93b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 17:46:25.923996   54844 out.go:179] * Starting "bridge-387181" primary control-plane node in "bridge-387181" cluster
	I0908 17:46:25.925289   54844 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 17:46:25.925343   54844 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 17:46:25.925357   54844 cache.go:58] Caching tarball of preloaded images
	I0908 17:46:25.925457   54844 preload.go:172] Found /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 17:46:25.925473   54844 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 17:46:25.925563   54844 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/config.json ...
	I0908 17:46:25.925581   54844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/config.json: {Name:mka1fee2bee3480332a585fe316a7f58fdee8bc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 17:46:25.925753   54844 start.go:360] acquireMachinesLock for bridge-387181: {Name:mka7c3ca4a3e37e9483e7804183d91c6725d32e4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0908 17:46:25.925789   54844 start.go:364] duration metric: took 19.22µs to acquireMachinesLock for "bridge-387181"
	I0908 17:46:25.925812   54844 start.go:93] Provisioning new machine with config: &{Name:bridge-387181 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-387181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 17:46:25.925880   54844 start.go:125] createHost starting for "" (driver="kvm2")
	W0908 17:46:24.210683   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	W0908 17:46:26.708701   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	I0908 17:46:25.928336   54844 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0908 17:46:25.928502   54844 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:46:25.928559   54844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:46:25.946805   54844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0908 17:46:25.947339   54844 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:46:25.947863   54844 main.go:141] libmachine: Using API Version  1
	I0908 17:46:25.947884   54844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:46:25.948261   54844 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:46:25.948478   54844 main.go:141] libmachine: (bridge-387181) Calling .GetMachineName
	I0908 17:46:25.948672   54844 main.go:141] libmachine: (bridge-387181) Calling .DriverName
	I0908 17:46:25.948846   54844 start.go:159] libmachine.API.Create for "bridge-387181" (driver="kvm2")
	I0908 17:46:25.948877   54844 client.go:168] LocalClient.Create starting
	I0908 17:46:25.948907   54844 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21504-7629/.minikube/certs/ca.pem
	I0908 17:46:25.948940   54844 main.go:141] libmachine: Decoding PEM data...
	I0908 17:46:25.948957   54844 main.go:141] libmachine: Parsing certificate...
	I0908 17:46:25.949020   54844 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21504-7629/.minikube/certs/cert.pem
	I0908 17:46:25.949044   54844 main.go:141] libmachine: Decoding PEM data...
	I0908 17:46:25.949057   54844 main.go:141] libmachine: Parsing certificate...
	I0908 17:46:25.949072   54844 main.go:141] libmachine: Running pre-create checks...
	I0908 17:46:25.949080   54844 main.go:141] libmachine: (bridge-387181) Calling .PreCreateCheck
	I0908 17:46:25.949364   54844 main.go:141] libmachine: (bridge-387181) Calling .GetConfigRaw
	I0908 17:46:25.949709   54844 main.go:141] libmachine: Creating machine...
	I0908 17:46:25.949728   54844 main.go:141] libmachine: (bridge-387181) Calling .Create
	I0908 17:46:25.949929   54844 main.go:141] libmachine: (bridge-387181) creating KVM machine...
	I0908 17:46:25.949945   54844 main.go:141] libmachine: (bridge-387181) creating network...
	I0908 17:46:25.951425   54844 main.go:141] libmachine: (bridge-387181) DBG | found existing default KVM network
	I0908 17:46:25.952310   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:25.952161   54867 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b2:27:32} reservation:<nil>}
	I0908 17:46:25.953385   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:25.953293   54867 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011ba50}
	I0908 17:46:25.953413   54844 main.go:141] libmachine: (bridge-387181) DBG | created network xml: 
	I0908 17:46:25.953426   54844 main.go:141] libmachine: (bridge-387181) DBG | <network>
	I0908 17:46:25.953433   54844 main.go:141] libmachine: (bridge-387181) DBG |   <name>mk-bridge-387181</name>
	I0908 17:46:25.953443   54844 main.go:141] libmachine: (bridge-387181) DBG |   <dns enable='no'/>
	I0908 17:46:25.953455   54844 main.go:141] libmachine: (bridge-387181) DBG |   
	I0908 17:46:25.953468   54844 main.go:141] libmachine: (bridge-387181) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0908 17:46:25.953480   54844 main.go:141] libmachine: (bridge-387181) DBG |     <dhcp>
	I0908 17:46:25.953500   54844 main.go:141] libmachine: (bridge-387181) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0908 17:46:25.953514   54844 main.go:141] libmachine: (bridge-387181) DBG |     </dhcp>
	I0908 17:46:25.953525   54844 main.go:141] libmachine: (bridge-387181) DBG |   </ip>
	I0908 17:46:25.953532   54844 main.go:141] libmachine: (bridge-387181) DBG |   
	I0908 17:46:25.953540   54844 main.go:141] libmachine: (bridge-387181) DBG | </network>
	I0908 17:46:25.953546   54844 main.go:141] libmachine: (bridge-387181) DBG | 
	I0908 17:46:25.959032   54844 main.go:141] libmachine: (bridge-387181) DBG | trying to create private KVM network mk-bridge-387181 192.168.50.0/24...
	I0908 17:46:26.035206   54844 main.go:141] libmachine: (bridge-387181) DBG | private KVM network mk-bridge-387181 192.168.50.0/24 created
	I0908 17:46:26.035277   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:26.035168   54867 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 17:46:26.035292   54844 main.go:141] libmachine: (bridge-387181) setting up store path in /home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181 ...
	I0908 17:46:26.035323   54844 main.go:141] libmachine: (bridge-387181) building disk image from file:///home/jenkins/minikube-integration/21504-7629/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0908 17:46:26.035342   54844 main.go:141] libmachine: (bridge-387181) Downloading /home/jenkins/minikube-integration/21504-7629/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21504-7629/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0908 17:46:26.320658   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:26.320517   54867 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181/id_rsa...
	I0908 17:46:26.527875   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:26.527712   54867 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181/bridge-387181.rawdisk...
	I0908 17:46:26.527911   54844 main.go:141] libmachine: (bridge-387181) DBG | Writing magic tar header
	I0908 17:46:26.527926   54844 main.go:141] libmachine: (bridge-387181) DBG | Writing SSH key tar header
	I0908 17:46:26.527938   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:26.527822   54867 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181 ...
	I0908 17:46:26.527950   54844 main.go:141] libmachine: (bridge-387181) setting executable bit set on /home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181 (perms=drwx------)
	I0908 17:46:26.527964   54844 main.go:141] libmachine: (bridge-387181) setting executable bit set on /home/jenkins/minikube-integration/21504-7629/.minikube/machines (perms=drwxr-xr-x)
	I0908 17:46:26.527975   54844 main.go:141] libmachine: (bridge-387181) setting executable bit set on /home/jenkins/minikube-integration/21504-7629/.minikube (perms=drwxr-xr-x)
	I0908 17:46:26.528005   54844 main.go:141] libmachine: (bridge-387181) setting executable bit set on /home/jenkins/minikube-integration/21504-7629 (perms=drwxrwxr-x)
	I0908 17:46:26.528021   54844 main.go:141] libmachine: (bridge-387181) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0908 17:46:26.528032   54844 main.go:141] libmachine: (bridge-387181) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181
	I0908 17:46:26.528048   54844 main.go:141] libmachine: (bridge-387181) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21504-7629/.minikube/machines
	I0908 17:46:26.528057   54844 main.go:141] libmachine: (bridge-387181) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 17:46:26.528068   54844 main.go:141] libmachine: (bridge-387181) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21504-7629
	I0908 17:46:26.528084   54844 main.go:141] libmachine: (bridge-387181) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0908 17:46:26.528096   54844 main.go:141] libmachine: (bridge-387181) DBG | checking permissions on dir: /home/jenkins
	I0908 17:46:26.528113   54844 main.go:141] libmachine: (bridge-387181) DBG | checking permissions on dir: /home
	I0908 17:46:26.528124   54844 main.go:141] libmachine: (bridge-387181) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0908 17:46:26.528134   54844 main.go:141] libmachine: (bridge-387181) DBG | skipping /home - not owner
	I0908 17:46:26.528151   54844 main.go:141] libmachine: (bridge-387181) creating domain...
	I0908 17:46:26.529360   54844 main.go:141] libmachine: (bridge-387181) define libvirt domain using xml: 
	I0908 17:46:26.529382   54844 main.go:141] libmachine: (bridge-387181) <domain type='kvm'>
	I0908 17:46:26.529389   54844 main.go:141] libmachine: (bridge-387181)   <name>bridge-387181</name>
	I0908 17:46:26.529398   54844 main.go:141] libmachine: (bridge-387181)   <memory unit='MiB'>3072</memory>
	I0908 17:46:26.529404   54844 main.go:141] libmachine: (bridge-387181)   <vcpu>2</vcpu>
	I0908 17:46:26.529409   54844 main.go:141] libmachine: (bridge-387181)   <features>
	I0908 17:46:26.529414   54844 main.go:141] libmachine: (bridge-387181)     <acpi/>
	I0908 17:46:26.529444   54844 main.go:141] libmachine: (bridge-387181)     <apic/>
	I0908 17:46:26.529455   54844 main.go:141] libmachine: (bridge-387181)     <pae/>
	I0908 17:46:26.529465   54844 main.go:141] libmachine: (bridge-387181)     
	I0908 17:46:26.529473   54844 main.go:141] libmachine: (bridge-387181)   </features>
	I0908 17:46:26.529485   54844 main.go:141] libmachine: (bridge-387181)   <cpu mode='host-passthrough'>
	I0908 17:46:26.529502   54844 main.go:141] libmachine: (bridge-387181)   
	I0908 17:46:26.529506   54844 main.go:141] libmachine: (bridge-387181)   </cpu>
	I0908 17:46:26.529510   54844 main.go:141] libmachine: (bridge-387181)   <os>
	I0908 17:46:26.529515   54844 main.go:141] libmachine: (bridge-387181)     <type>hvm</type>
	I0908 17:46:26.529521   54844 main.go:141] libmachine: (bridge-387181)     <boot dev='cdrom'/>
	I0908 17:46:26.529547   54844 main.go:141] libmachine: (bridge-387181)     <boot dev='hd'/>
	I0908 17:46:26.529566   54844 main.go:141] libmachine: (bridge-387181)     <bootmenu enable='no'/>
	I0908 17:46:26.529572   54844 main.go:141] libmachine: (bridge-387181)   </os>
	I0908 17:46:26.529576   54844 main.go:141] libmachine: (bridge-387181)   <devices>
	I0908 17:46:26.529584   54844 main.go:141] libmachine: (bridge-387181)     <disk type='file' device='cdrom'>
	I0908 17:46:26.529602   54844 main.go:141] libmachine: (bridge-387181)       <source file='/home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181/boot2docker.iso'/>
	I0908 17:46:26.529610   54844 main.go:141] libmachine: (bridge-387181)       <target dev='hdc' bus='scsi'/>
	I0908 17:46:26.529614   54844 main.go:141] libmachine: (bridge-387181)       <readonly/>
	I0908 17:46:26.529619   54844 main.go:141] libmachine: (bridge-387181)     </disk>
	I0908 17:46:26.529625   54844 main.go:141] libmachine: (bridge-387181)     <disk type='file' device='disk'>
	I0908 17:46:26.529632   54844 main.go:141] libmachine: (bridge-387181)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0908 17:46:26.529641   54844 main.go:141] libmachine: (bridge-387181)       <source file='/home/jenkins/minikube-integration/21504-7629/.minikube/machines/bridge-387181/bridge-387181.rawdisk'/>
	I0908 17:46:26.529646   54844 main.go:141] libmachine: (bridge-387181)       <target dev='hda' bus='virtio'/>
	I0908 17:46:26.529657   54844 main.go:141] libmachine: (bridge-387181)     </disk>
	I0908 17:46:26.529665   54844 main.go:141] libmachine: (bridge-387181)     <interface type='network'>
	I0908 17:46:26.529672   54844 main.go:141] libmachine: (bridge-387181)       <source network='mk-bridge-387181'/>
	I0908 17:46:26.529680   54844 main.go:141] libmachine: (bridge-387181)       <model type='virtio'/>
	I0908 17:46:26.529691   54844 main.go:141] libmachine: (bridge-387181)     </interface>
	I0908 17:46:26.529696   54844 main.go:141] libmachine: (bridge-387181)     <interface type='network'>
	I0908 17:46:26.529703   54844 main.go:141] libmachine: (bridge-387181)       <source network='default'/>
	I0908 17:46:26.529708   54844 main.go:141] libmachine: (bridge-387181)       <model type='virtio'/>
	I0908 17:46:26.529714   54844 main.go:141] libmachine: (bridge-387181)     </interface>
	I0908 17:46:26.529719   54844 main.go:141] libmachine: (bridge-387181)     <serial type='pty'>
	I0908 17:46:26.529723   54844 main.go:141] libmachine: (bridge-387181)       <target port='0'/>
	I0908 17:46:26.529750   54844 main.go:141] libmachine: (bridge-387181)     </serial>
	I0908 17:46:26.529774   54844 main.go:141] libmachine: (bridge-387181)     <console type='pty'>
	I0908 17:46:26.529786   54844 main.go:141] libmachine: (bridge-387181)       <target type='serial' port='0'/>
	I0908 17:46:26.529796   54844 main.go:141] libmachine: (bridge-387181)     </console>
	I0908 17:46:26.529804   54844 main.go:141] libmachine: (bridge-387181)     <rng model='virtio'>
	I0908 17:46:26.529816   54844 main.go:141] libmachine: (bridge-387181)       <backend model='random'>/dev/random</backend>
	I0908 17:46:26.529827   54844 main.go:141] libmachine: (bridge-387181)     </rng>
	I0908 17:46:26.529834   54844 main.go:141] libmachine: (bridge-387181)     
	I0908 17:46:26.529845   54844 main.go:141] libmachine: (bridge-387181)     
	I0908 17:46:26.529853   54844 main.go:141] libmachine: (bridge-387181)   </devices>
	I0908 17:46:26.529858   54844 main.go:141] libmachine: (bridge-387181) </domain>
	I0908 17:46:26.529864   54844 main.go:141] libmachine: (bridge-387181) 
	I0908 17:46:26.534136   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:f7:12:48 in network default
	I0908 17:46:26.534730   54844 main.go:141] libmachine: (bridge-387181) starting domain...
	I0908 17:46:26.534758   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:26.534766   54844 main.go:141] libmachine: (bridge-387181) ensuring networks are active...
	I0908 17:46:26.535458   54844 main.go:141] libmachine: (bridge-387181) Ensuring network default is active
	I0908 17:46:26.535851   54844 main.go:141] libmachine: (bridge-387181) Ensuring network mk-bridge-387181 is active
	I0908 17:46:26.536392   54844 main.go:141] libmachine: (bridge-387181) getting domain XML...
	I0908 17:46:26.537239   54844 main.go:141] libmachine: (bridge-387181) creating domain...
	I0908 17:46:27.876600   54844 main.go:141] libmachine: (bridge-387181) waiting for IP...
	I0908 17:46:27.877456   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:27.877964   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:27.878040   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:27.877966   54867 retry.go:31] will retry after 259.198907ms: waiting for domain to come up
	I0908 17:46:28.138640   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:28.139234   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:28.139265   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:28.139207   54867 retry.go:31] will retry after 308.03493ms: waiting for domain to come up
	I0908 17:46:28.448874   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:28.449395   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:28.449462   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:28.449372   54867 retry.go:31] will retry after 483.610435ms: waiting for domain to come up
	I0908 17:46:28.934958   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:28.935681   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:28.935708   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:28.935645   54867 retry.go:31] will retry after 409.672152ms: waiting for domain to come up
	I0908 17:46:29.347313   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:29.347932   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:29.347982   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:29.347924   54867 retry.go:31] will retry after 645.671268ms: waiting for domain to come up
	I0908 17:46:29.995830   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:29.996398   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:29.996464   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:29.996376   54867 retry.go:31] will retry after 742.214804ms: waiting for domain to come up
	I0908 17:46:30.740021   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:30.740569   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:30.740602   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:30.740557   54867 retry.go:31] will retry after 1.104415458s: waiting for domain to come up
	W0908 17:46:27.439979   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:29.440873   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	I0908 17:46:27.242275   52672 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd 38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5130b38a54e8 57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76 c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251 3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14 6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567 3a809949aa75c8b15560bcb94da3b52ee69714288ea99f94d520cd42109caa70 87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d 4230c39d270b7cfaea11cf53a694273e2366581fe2c9e43e766ef969ea14962a 4fc28498bd5596d1c10917ff34757e7634be86e000adead59a5fa200e1f4f71a aecdc8609890df363dc852143482e4e4ad10e31c2f12fb490bf8dc1783d5bb65: (20.738033201s)
	W0908 17:46:27.242355   52672 kubeadm.go:640] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd 38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5130b38a54e8 57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76 c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251 3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14 6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567 3a809949aa75c8b15560bcb94da3b52ee69714288ea99f94d520cd42109caa70 87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d 4230c39d270b7cfaea11cf53a694273e2366581fe2c9e43e766ef969ea14962a 4fc28498bd5596d1c10917ff34757e7634be86e000adead59a5fa200e1f4f71a aecdc8609890df363dc852143482e4e4ad10e31c2f12fb490bf8dc1783d5bb65: Process exited with status 1
	stdout:
	c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd
	38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5130b38a54e8
	57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76
	c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251
	3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14
	6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567
	3a809949aa75c8b15560bcb94da3b52ee69714288ea99f94d520cd42109caa70
	
	stderr:
	E0908 17:46:27.234846    3313 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d\": container with ID starting with 87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d not found: ID does not exist" containerID="87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d"
	time="2025-09-08T17:46:27Z" level=fatal msg="stopping the container \"87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d\": rpc error: code = NotFound desc = could not find container \"87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d\": container with ID starting with 87157ec692ce1976e70f546d271d9fa6fb8146c26b8a1f39c22e0972c855973d not found: ID does not exist"
	I0908 17:46:27.242421   52672 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0908 17:46:27.288132   52672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 17:46:27.305174   52672 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Sep  8 17:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5638 Sep  8 17:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Sep  8 17:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5586 Sep  8 17:43 /etc/kubernetes/scheduler.conf
	
	I0908 17:46:27.305243   52672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 17:46:27.318443   52672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 17:46:27.330222   52672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 17:46:27.330296   52672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 17:46:27.344364   52672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 17:46:27.356811   52672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 17:46:27.356888   52672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 17:46:27.369803   52672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 17:46:27.383147   52672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 17:46:27.383206   52672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 17:46:27.397838   52672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 17:46:27.411850   52672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:46:27.475473   52672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:46:28.978315   52672 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.50279868s)
	I0908 17:46:28.978353   52672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:46:29.295854   52672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:46:29.376398   52672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:46:29.482101   52672 api_server.go:52] waiting for apiserver process to appear ...
	I0908 17:46:29.482182   52672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:46:29.982907   52672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:46:30.483250   52672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:46:30.519087   52672 api_server.go:72] duration metric: took 1.036983742s to wait for apiserver process to appear ...
	I0908 17:46:30.519172   52672 api_server.go:88] waiting for apiserver healthz status ...
	I0908 17:46:30.519202   52672 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	W0908 17:46:28.708888   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	W0908 17:46:30.710813   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	W0908 17:46:33.209675   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	I0908 17:46:32.419724   52672 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 17:46:32.419759   52672 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 17:46:32.419781   52672 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0908 17:46:32.450628   52672 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 17:46:32.450690   52672 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 17:46:32.520000   52672 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0908 17:46:32.534842   52672 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 17:46:32.534876   52672 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 17:46:33.019342   52672 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0908 17:46:33.025721   52672 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 17:46:33.025752   52672 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 17:46:33.519338   52672 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0908 17:46:33.528332   52672 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 17:46:33.528375   52672 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 17:46:34.019298   52672 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0908 17:46:34.029770   52672 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0908 17:46:34.042524   52672 api_server.go:141] control plane version: v1.34.0
	I0908 17:46:34.042571   52672 api_server.go:131] duration metric: took 3.523386145s to wait for apiserver health ...
	I0908 17:46:34.042583   52672 cni.go:84] Creating CNI manager for ""
	I0908 17:46:34.042593   52672 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 17:46:34.044031   52672 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 17:46:34.045182   52672 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 17:46:34.068261   52672 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0908 17:46:34.100708   52672 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 17:46:34.113806   52672 system_pods.go:59] 6 kube-system pods found
	I0908 17:46:34.113857   52672 system_pods.go:61] "coredns-66bc5c9577-c2tlk" [6737e7ba-9abe-4fb0-92b9-28b32bb89ce8] Running
	I0908 17:46:34.113868   52672 system_pods.go:61] "etcd-pause-582402" [0079a3c1-312e-4117-8543-ba500931a7ba] Running
	I0908 17:46:34.113883   52672 system_pods.go:61] "kube-apiserver-pause-582402" [2258819d-bf5c-41e4-9357-2bc8d7ee5bc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 17:46:34.113894   52672 system_pods.go:61] "kube-controller-manager-pause-582402" [86bc4537-33f8-47ff-9941-c6fee4f6560f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 17:46:34.113910   52672 system_pods.go:61] "kube-proxy-9ld9z" [ff614778-696c-4113-9693-970eea6f5d45] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 17:46:34.113921   52672 system_pods.go:61] "kube-scheduler-pause-582402" [30c4c4e9-a2c8-4489-8d3f-b804341119d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 17:46:34.113934   52672 system_pods.go:74] duration metric: took 13.198432ms to wait for pod list to return data ...
	I0908 17:46:34.113949   52672 node_conditions.go:102] verifying NodePressure condition ...
	I0908 17:46:34.121054   52672 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 17:46:34.121079   52672 node_conditions.go:123] node cpu capacity is 2
	I0908 17:46:34.121088   52672 node_conditions.go:105] duration metric: took 7.135229ms to run NodePressure ...
	I0908 17:46:34.121107   52672 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 17:46:34.393982   52672 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0908 17:46:34.397790   52672 kubeadm.go:735] kubelet initialised
	I0908 17:46:34.397823   52672 kubeadm.go:736] duration metric: took 3.815267ms waiting for restarted kubelet to initialise ...
	I0908 17:46:34.397842   52672 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 17:46:34.413894   52672 ops.go:34] apiserver oom_adj: -16
	I0908 17:46:34.413917   52672 kubeadm.go:593] duration metric: took 28.078864169s to restartPrimaryControlPlane
	I0908 17:46:34.413931   52672 kubeadm.go:394] duration metric: took 28.417572615s to StartCluster
	I0908 17:46:34.413954   52672 settings.go:142] acquiring lock: {Name:mk1c22e0fe8486f74cbd8991c9b3bb6f4c36c978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 17:46:34.414039   52672 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 17:46:34.415193   52672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/kubeconfig: {Name:mkb59774845ad4e65ea2ac11e21880c504ffe601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 17:46:34.415499   52672 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 17:46:34.415586   52672 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 17:46:34.415791   52672 config.go:182] Loaded profile config "pause-582402": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:46:34.417300   52672 out.go:179] * Enabled addons: 
	I0908 17:46:34.417311   52672 out.go:179] * Verifying Kubernetes components...
	I0908 17:46:31.846133   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:31.846626   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:31.846772   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:31.846701   54867 retry.go:31] will retry after 1.135481372s: waiting for domain to come up
	I0908 17:46:32.984196   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:32.984662   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:32.984691   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:32.984648   54867 retry.go:31] will retry after 1.28455646s: waiting for domain to come up
	I0908 17:46:34.271149   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:34.271667   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:34.271698   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:34.271640   54867 retry.go:31] will retry after 1.636931145s: waiting for domain to come up
	W0908 17:46:31.442530   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:33.940787   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	I0908 17:46:34.418353   52672 addons.go:514] duration metric: took 2.779085ms for enable addons: enabled=[]
	I0908 17:46:34.418377   52672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 17:46:34.680180   52672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 17:46:34.707862   52672 node_ready.go:35] waiting up to 6m0s for node "pause-582402" to be "Ready" ...
	I0908 17:46:34.714406   52672 node_ready.go:49] node "pause-582402" is "Ready"
	I0908 17:46:34.714439   52672 node_ready.go:38] duration metric: took 6.540889ms for node "pause-582402" to be "Ready" ...
	I0908 17:46:34.714455   52672 api_server.go:52] waiting for apiserver process to appear ...
	I0908 17:46:34.714513   52672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:46:34.741180   52672 api_server.go:72] duration metric: took 325.637518ms to wait for apiserver process to appear ...
	I0908 17:46:34.741214   52672 api_server.go:88] waiting for apiserver healthz status ...
	I0908 17:46:34.741268   52672 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0908 17:46:34.748497   52672 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0908 17:46:34.750002   52672 api_server.go:141] control plane version: v1.34.0
	I0908 17:46:34.750024   52672 api_server.go:131] duration metric: took 8.801955ms to wait for apiserver health ...
	I0908 17:46:34.750036   52672 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 17:46:34.753815   52672 system_pods.go:59] 6 kube-system pods found
	I0908 17:46:34.753847   52672 system_pods.go:61] "coredns-66bc5c9577-c2tlk" [6737e7ba-9abe-4fb0-92b9-28b32bb89ce8] Running
	I0908 17:46:34.753856   52672 system_pods.go:61] "etcd-pause-582402" [0079a3c1-312e-4117-8543-ba500931a7ba] Running
	I0908 17:46:34.753867   52672 system_pods.go:61] "kube-apiserver-pause-582402" [2258819d-bf5c-41e4-9357-2bc8d7ee5bc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 17:46:34.753877   52672 system_pods.go:61] "kube-controller-manager-pause-582402" [86bc4537-33f8-47ff-9941-c6fee4f6560f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 17:46:34.753888   52672 system_pods.go:61] "kube-proxy-9ld9z" [ff614778-696c-4113-9693-970eea6f5d45] Running
	I0908 17:46:34.753896   52672 system_pods.go:61] "kube-scheduler-pause-582402" [30c4c4e9-a2c8-4489-8d3f-b804341119d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 17:46:34.753905   52672 system_pods.go:74] duration metric: took 3.862953ms to wait for pod list to return data ...
	I0908 17:46:34.753917   52672 default_sa.go:34] waiting for default service account to be created ...
	I0908 17:46:34.756474   52672 default_sa.go:45] found service account: "default"
	I0908 17:46:34.756495   52672 default_sa.go:55] duration metric: took 2.571785ms for default service account to be created ...
	I0908 17:46:34.756505   52672 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 17:46:34.764959   52672 system_pods.go:86] 6 kube-system pods found
	I0908 17:46:34.764992   52672 system_pods.go:89] "coredns-66bc5c9577-c2tlk" [6737e7ba-9abe-4fb0-92b9-28b32bb89ce8] Running
	I0908 17:46:34.765000   52672 system_pods.go:89] "etcd-pause-582402" [0079a3c1-312e-4117-8543-ba500931a7ba] Running
	I0908 17:46:34.765011   52672 system_pods.go:89] "kube-apiserver-pause-582402" [2258819d-bf5c-41e4-9357-2bc8d7ee5bc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 17:46:34.765024   52672 system_pods.go:89] "kube-controller-manager-pause-582402" [86bc4537-33f8-47ff-9941-c6fee4f6560f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 17:46:34.765032   52672 system_pods.go:89] "kube-proxy-9ld9z" [ff614778-696c-4113-9693-970eea6f5d45] Running
	I0908 17:46:34.765040   52672 system_pods.go:89] "kube-scheduler-pause-582402" [30c4c4e9-a2c8-4489-8d3f-b804341119d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 17:46:34.765051   52672 system_pods.go:126] duration metric: took 8.537852ms to wait for k8s-apps to be running ...
	I0908 17:46:34.765064   52672 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 17:46:34.765116   52672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 17:46:34.786190   52672 system_svc.go:56] duration metric: took 21.115435ms WaitForService to wait for kubelet
	I0908 17:46:34.786234   52672 kubeadm.go:578] duration metric: took 370.704648ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 17:46:34.786254   52672 node_conditions.go:102] verifying NodePressure condition ...
	I0908 17:46:34.789187   52672 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0908 17:46:34.789211   52672 node_conditions.go:123] node cpu capacity is 2
	I0908 17:46:34.789225   52672 node_conditions.go:105] duration metric: took 2.965946ms to run NodePressure ...
	I0908 17:46:34.789238   52672 start.go:241] waiting for startup goroutines ...
	I0908 17:46:34.789246   52672 start.go:246] waiting for cluster config update ...
	I0908 17:46:34.789254   52672 start.go:255] writing updated cluster config ...
	I0908 17:46:34.789546   52672 ssh_runner.go:195] Run: rm -f paused
	I0908 17:46:34.796754   52672 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 17:46:34.797847   52672 kapi.go:59] client config for pause-582402: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/profiles/pause-582402/client.crt", KeyFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/profiles/pause-582402/client.key", CAFile:"/home/jenkins/minikube-integration/21504-7629/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0908 17:46:34.801843   52672 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c2tlk" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:34.807696   52672 pod_ready.go:94] pod "coredns-66bc5c9577-c2tlk" is "Ready"
	I0908 17:46:34.807727   52672 pod_ready.go:86] duration metric: took 5.848212ms for pod "coredns-66bc5c9577-c2tlk" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:34.810773   52672 pod_ready.go:83] waiting for pod "etcd-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:34.816627   52672 pod_ready.go:94] pod "etcd-pause-582402" is "Ready"
	I0908 17:46:34.816652   52672 pod_ready.go:86] duration metric: took 5.855848ms for pod "etcd-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:34.819999   52672 pod_ready.go:83] waiting for pod "kube-apiserver-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 17:46:35.210486   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	W0908 17:46:37.211215   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	I0908 17:46:35.910296   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:35.910955   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:35.910995   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:35.910935   54867 retry.go:31] will retry after 2.234938879s: waiting for domain to come up
	I0908 17:46:38.148158   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:38.148809   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:38.148858   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:38.148785   54867 retry.go:31] will retry after 2.887047844s: waiting for domain to come up
	W0908 17:46:35.941846   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:37.942331   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:40.439762   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:36.830288   52672 pod_ready.go:104] pod "kube-apiserver-pause-582402" is not "Ready", error: <nil>
	W0908 17:46:39.326394   52672 pod_ready.go:104] pod "kube-apiserver-pause-582402" is not "Ready", error: <nil>
	W0908 17:46:41.326905   52672 pod_ready.go:104] pod "kube-apiserver-pause-582402" is not "Ready", error: <nil>
	W0908 17:46:39.709672   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	W0908 17:46:42.209034   52497 node_ready.go:57] node "flannel-387181" has "Ready":"False" status (will retry)
	I0908 17:46:43.831976   52672 pod_ready.go:94] pod "kube-apiserver-pause-582402" is "Ready"
	I0908 17:46:43.832002   52672 pod_ready.go:86] duration metric: took 9.011976618s for pod "kube-apiserver-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:43.834242   52672 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:43.839567   52672 pod_ready.go:94] pod "kube-controller-manager-pause-582402" is "Ready"
	I0908 17:46:43.839593   52672 pod_ready.go:86] duration metric: took 5.326245ms for pod "kube-controller-manager-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:43.841513   52672 pod_ready.go:83] waiting for pod "kube-proxy-9ld9z" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:43.845460   52672 pod_ready.go:94] pod "kube-proxy-9ld9z" is "Ready"
	I0908 17:46:43.845488   52672 pod_ready.go:86] duration metric: took 3.956239ms for pod "kube-proxy-9ld9z" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:43.847410   52672 pod_ready.go:83] waiting for pod "kube-scheduler-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:44.424735   52672 pod_ready.go:94] pod "kube-scheduler-pause-582402" is "Ready"
	I0908 17:46:44.424762   52672 pod_ready.go:86] duration metric: took 577.331373ms for pod "kube-scheduler-pause-582402" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 17:46:44.424772   52672 pod_ready.go:40] duration metric: took 9.62797322s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 17:46:44.474082   52672 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 17:46:44.475728   52672 out.go:179] * Done! kubectl is now configured to use "pause-582402" cluster and "default" namespace by default
	I0908 17:46:41.037362   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:41.037918   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:41.037953   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:41.037878   54867 retry.go:31] will retry after 4.131531342s: waiting for domain to come up
	I0908 17:46:45.173765   54844 main.go:141] libmachine: (bridge-387181) DBG | domain bridge-387181 has defined MAC address 52:54:00:1f:8e:45 in network mk-bridge-387181
	I0908 17:46:45.174271   54844 main.go:141] libmachine: (bridge-387181) DBG | unable to find current IP address of domain bridge-387181 in network mk-bridge-387181
	I0908 17:46:45.174301   54844 main.go:141] libmachine: (bridge-387181) DBG | I0908 17:46:45.174231   54867 retry.go:31] will retry after 5.250845666s: waiting for domain to come up
	W0908 17:46:42.439800   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	W0908 17:46:44.440592   52596 pod_ready.go:104] pod "coredns-66bc5c9577-7s85r" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.271236544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757353607271215280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c02d66a-73a4-4553-ad50-ea80c90b3548 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.272932594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1bddbf95-e48c-4b72-a4b8-89da7c3907ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.272996706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1bddbf95-e48c-4b72-a4b8-89da7c3907ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.273272386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16e436b9bd7b21a9b36bd130cd8ea344cda06a19dda917a779a8163811c5366e,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757353593718394484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccda1b726e4d850914c07874d01efb6cd70d3958c4ff375fe9129434c7a904bf,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757353589963125191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194e1ca66696d558ac6b8630dab91ff0537adf6ad4e3c214089e239178fdf36b,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757353589925068783,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernet
es.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bf55517747fb3d1d169b67e626f80cf6961f1bd4cf8064c4e0f3caa6b4d55a,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757353589907027314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee46f499922fdb4631f75c5ceaed4cf569db29c8ef0ecb8253beb03d8eee294d,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757353576688092036,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327b7b6576be426c43e05ceebed39315dfe9c5369defa56c1aa708bf42535ac1,PodSandboxId:e48594ec30229fded335944f0cbf97b418ab1d38357d08776c4693d53741b34e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175735
3566748815783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f
90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757353565574491656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5
130b38a54e8,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757353565537943192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757353565514013473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757353565326963783,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757353565356091406,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567,PodSandboxId:88e98ad5b6363624d749ab496855580275c3720f64e800917730279dd62d6e51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757353449716393553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1bddbf95-e48c-4b72-a4b8-89da7c3907ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.328251595Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=931fffb0-e835-433a-bbf3-3e23ff4aa749 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.328373901Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=931fffb0-e835-433a-bbf3-3e23ff4aa749 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.331650024Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82d34ec1-03c7-4a27-8fc9-35017bba7826 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.332450137Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757353607332424549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82d34ec1-03c7-4a27-8fc9-35017bba7826 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.333401968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46aff035-d586-43a4-af13-b26158641159 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.333650854Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46aff035-d586-43a4-af13-b26158641159 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.334140141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16e436b9bd7b21a9b36bd130cd8ea344cda06a19dda917a779a8163811c5366e,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757353593718394484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccda1b726e4d850914c07874d01efb6cd70d3958c4ff375fe9129434c7a904bf,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757353589963125191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194e1ca66696d558ac6b8630dab91ff0537adf6ad4e3c214089e239178fdf36b,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757353589925068783,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernet
es.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bf55517747fb3d1d169b67e626f80cf6961f1bd4cf8064c4e0f3caa6b4d55a,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757353589907027314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee46f499922fdb4631f75c5ceaed4cf569db29c8ef0ecb8253beb03d8eee294d,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757353576688092036,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327b7b6576be426c43e05ceebed39315dfe9c5369defa56c1aa708bf42535ac1,PodSandboxId:e48594ec30229fded335944f0cbf97b418ab1d38357d08776c4693d53741b34e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175735
3566748815783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f
90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757353565574491656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5
130b38a54e8,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757353565537943192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757353565514013473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757353565326963783,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757353565356091406,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567,PodSandboxId:88e98ad5b6363624d749ab496855580275c3720f64e800917730279dd62d6e51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757353449716393553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46aff035-d586-43a4-af13-b26158641159 name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.393701467Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87b376c7-9d21-4d71-938c-35ba33a2b5a6 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.394078358Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87b376c7-9d21-4d71-938c-35ba33a2b5a6 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.395497981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11744b91-8b5f-4a8f-bf5e-b91b1c6217df name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.396375988Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757353607396345501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11744b91-8b5f-4a8f-bf5e-b91b1c6217df name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.397076434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abc3618d-5684-464a-aa51-a7d237f970bc name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.397205574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abc3618d-5684-464a-aa51-a7d237f970bc name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.397543631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16e436b9bd7b21a9b36bd130cd8ea344cda06a19dda917a779a8163811c5366e,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757353593718394484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccda1b726e4d850914c07874d01efb6cd70d3958c4ff375fe9129434c7a904bf,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757353589963125191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194e1ca66696d558ac6b8630dab91ff0537adf6ad4e3c214089e239178fdf36b,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757353589925068783,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernet
es.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bf55517747fb3d1d169b67e626f80cf6961f1bd4cf8064c4e0f3caa6b4d55a,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757353589907027314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee46f499922fdb4631f75c5ceaed4cf569db29c8ef0ecb8253beb03d8eee294d,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757353576688092036,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327b7b6576be426c43e05ceebed39315dfe9c5369defa56c1aa708bf42535ac1,PodSandboxId:e48594ec30229fded335944f0cbf97b418ab1d38357d08776c4693d53741b34e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175735
3566748815783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f
90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757353565574491656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5
130b38a54e8,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757353565537943192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757353565514013473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757353565326963783,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757353565356091406,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567,PodSandboxId:88e98ad5b6363624d749ab496855580275c3720f64e800917730279dd62d6e51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757353449716393553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abc3618d-5684-464a-aa51-a7d237f970bc name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.445005404Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c054cc6f-1d43-4aa5-b000-dec002d37c09 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.445276793Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c054cc6f-1d43-4aa5-b000-dec002d37c09 name=/runtime.v1.RuntimeService/Version
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.446686395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2afa245c-a729-4e90-a5b0-c24d723ab17b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.447398275Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757353607447374395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2afa245c-a729-4e90-a5b0-c24d723ab17b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.447878029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f454e05-6596-4953-9d0c-7cffe73a982d name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.447923781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f454e05-6596-4953-9d0c-7cffe73a982d name=/runtime.v1.RuntimeService/ListContainers
	Sep 08 17:46:47 pause-582402 crio[2548]: time="2025-09-08 17:46:47.448145738Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16e436b9bd7b21a9b36bd130cd8ea344cda06a19dda917a779a8163811c5366e,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757353593718394484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccda1b726e4d850914c07874d01efb6cd70d3958c4ff375fe9129434c7a904bf,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757353589963125191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194e1ca66696d558ac6b8630dab91ff0537adf6ad4e3c214089e239178fdf36b,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757353589925068783,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernet
es.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7bf55517747fb3d1d169b67e626f80cf6961f1bd4cf8064c4e0f3caa6b4d55a,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757353589907027314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee46f499922fdb4631f75c5ceaed4cf569db29c8ef0ecb8253beb03d8eee294d,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757353576688092036,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327b7b6576be426c43e05ceebed39315dfe9c5369defa56c1aa708bf42535ac1,PodSandboxId:e48594ec30229fded335944f0cbf97b418ab1d38357d08776c4693d53741b34e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175735
3566748815783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd,PodSandboxId:20fb8bd0d62b9ea9d6b511beb3e7aae1889f03082f32d7f
90e727886a5c15182,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1757353565574491656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebd9f496e2f16bc85d652ca0a0e855c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5
130b38a54e8,PodSandboxId:9e78c9867194fef608292b9be0160c363c684d19308778ebd9aefddd4914a3b5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1757353565537943192,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 810bbb01bb8097c8a02986628db94034,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76,PodSandboxId:de2d9fa8364e75548347e849b4a0be5871185b8d1ae98f2ff0e7dd158dbffa92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1757353565514013473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bccdd3e2ca6f7b5319e56b44ded41059,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14,PodSandboxId:f04aac971a45b5f5fcbb6073c3a582066612ce2770c92e26264f4696012478dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1757353565326963783,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9ld9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff614778-696c-4113-9693-970eea6f5d45,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251,PodSandboxId:ed08660f376834348b81c75e357500e22c67b11d86aca775440bd5f306b57ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1757353565356091406,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-582402,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e5cd2e2419bb998c6a883bb24a0a14,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567,PodSandboxId:88e98ad5b6363624d749ab496855580275c3720f64e800917730279dd62d6e51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1757353449716393553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-c2tlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6737e7ba-9abe-4fb0-92b9-28b32bb89ce8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f454e05-6596-4953-9d0c-7cffe73a982d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	16e436b9bd7b2       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   13 seconds ago      Running             kube-proxy                2                   f04aac971a45b       kube-proxy-9ld9z
	ccda1b726e4d8       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   17 seconds ago      Running             kube-apiserver            2                   de2d9fa8364e7       kube-apiserver-pause-582402
	194e1ca66696d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   17 seconds ago      Running             kube-controller-manager   2                   ed08660f37683       kube-controller-manager-pause-582402
	f7bf55517747f       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   17 seconds ago      Running             kube-scheduler            2                   9e78c9867194f       kube-scheduler-pause-582402
	ee46f499922fd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   30 seconds ago      Running             etcd                      2                   20fb8bd0d62b9       etcd-pause-582402
	327b7b6576be4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   40 seconds ago      Running             coredns                   1                   e48594ec30229       coredns-66bc5c9577-c2tlk
	c2b9ba65561f5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   41 seconds ago      Exited              etcd                      1                   20fb8bd0d62b9       etcd-pause-582402
	38b1c91b613d1       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   42 seconds ago      Exited              kube-scheduler            1                   9e78c9867194f       kube-scheduler-pause-582402
	57c683e60f5c3       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   42 seconds ago      Exited              kube-apiserver            1                   de2d9fa8364e7       kube-apiserver-pause-582402
	c589b89a7aa0d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   42 seconds ago      Exited              kube-controller-manager   1                   ed08660f37683       kube-controller-manager-pause-582402
	3e24256763b07       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   42 seconds ago      Exited              kube-proxy                1                   f04aac971a45b       kube-proxy-9ld9z
	6d1f0f35bb8d6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   2 minutes ago       Exited              coredns                   0                   88e98ad5b6363       coredns-66bc5c9577-c2tlk
	
	
	==> coredns [327b7b6576be426c43e05ceebed39315dfe9c5369defa56c1aa708bf42535ac1] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56460 - 6091 "HINFO IN 8912549566450987198.2962805177611349679. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033537603s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6d1f0f35bb8d62f8d71887b07b8359245a4b346935783ed9ca3ca64e7080e567] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39250 - 2984 "HINFO IN 8293307131784017457.3055633146308688108. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036101816s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-582402
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-582402
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6
	                    minikube.k8s.io/name=pause-582402
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T17_44_04_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 17:44:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-582402
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 17:46:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 17:46:32 +0000   Mon, 08 Sep 2025 17:43:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 17:46:32 +0000   Mon, 08 Sep 2025 17:43:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 17:46:32 +0000   Mon, 08 Sep 2025 17:43:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 17:46:32 +0000   Mon, 08 Sep 2025 17:44:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    pause-582402
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 56da0bd8121b4274a96a44431f816186
	  System UUID:                56da0bd8-121b-4274-a96a-44431f816186
	  Boot ID:                    feff45fc-33e8-489e-a216-bac44daf0199
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-c2tlk                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m39s
	  kube-system                 etcd-pause-582402                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m44s
	  kube-system                 kube-apiserver-pause-582402             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-controller-manager-pause-582402    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-proxy-9ld9z                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-scheduler-pause-582402             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m37s                  kube-proxy       
	  Normal  Starting                 13s                    kube-proxy       
	  Normal  Starting                 2m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m50s (x8 over 2m50s)  kubelet          Node pause-582402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m50s (x8 over 2m50s)  kubelet          Node pause-582402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m50s (x7 over 2m50s)  kubelet          Node pause-582402 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m44s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m43s                  kubelet          Node pause-582402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m43s                  kubelet          Node pause-582402 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m43s                  kubelet          Node pause-582402 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m43s                  kubelet          Node pause-582402 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           2m40s                  node-controller  Node pause-582402 event: Registered Node pause-582402 in Controller
	  Normal  Starting                 18s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x8 over 18s)      kubelet          Node pause-582402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x8 over 18s)      kubelet          Node pause-582402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x7 over 18s)      kubelet          Node pause-582402 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                    node-controller  Node pause-582402 event: Registered Node pause-582402 in Controller
	
	
	==> dmesg <==
	[Sep 8 17:43] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000945] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004676] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.212135] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.096841] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.097653] kauditd_printk_skb: 74 callbacks suppressed
	[Sep 8 17:44] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.883355] kauditd_printk_skb: 19 callbacks suppressed
	[ +35.401097] kauditd_printk_skb: 183 callbacks suppressed
	[Sep 8 17:46] kauditd_printk_skb: 34 callbacks suppressed
	[ +11.085947] kauditd_printk_skb: 254 callbacks suppressed
	[  +0.138335] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.031852] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [c2b9ba65561f5d22117bbe3db4cb207053929514c35f1ee4bda4d654e86a7ebd] <==
	{"level":"warn","ts":"2025-09-08T17:46:06.385947Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"warn","ts":"2025-09-08T17:46:06.388810Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	{"level":"info","ts":"2025-09-08T17:46:06.390635Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.196:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.196:2380","--initial-cluster=pause-582402=https://192.168.39.196:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.196:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.196:2380","--name=pause-582402","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s
"]}
	{"level":"info","ts":"2025-09-08T17:46:06.390801Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2025-09-08T17:46:06.390832Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-09-08T17:46:06.390856Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.196:2380"]}
	{"level":"info","ts":"2025-09-08T17:46:06.390885Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-08T17:46:06.391498Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.196:2379"]}
	{"level":"info","ts":"2025-09-08T17:46:06.405643Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.4","git-sha":"5400cdc","go-version":"go1.23.11","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-582402","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.196:2380"],"listen-peer-urls":["https://192.168.39.196:2380"],"advertise-client-urls":["https://192.168.39.196:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.196:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-clu
ster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	{"level":"info","ts":"2025-09-08T17:46:06.406352Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc00011e7c8}"}
	{"level":"info","ts":"2025-09-08T17:46:06.458996Z","logger":"bbolt","caller":"bbolt@v1.4.2/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	{"level":"info","ts":"2025-09-08T17:46:06.463070Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"56.805996ms"}
	{"level":"info","ts":"2025-09-08T17:46:06.463162Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":487}
	{"level":"info","ts":"2025-09-08T17:46:06.508895Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-09-08T17:46:06.515128Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":901120,"backend-size":"901 kB","backend-size-in-use-bytes":884736,"backend-size-in-use":"885 kB"}
	{"level":"info","ts":"2025-09-08T17:46:06.518410Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	{"level":"info","ts":"2025-09-08T17:46:06.549170Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","commit-index":487}
	{"level":"info","ts":"2025-09-08T17:46:06.561671Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	{"level":"info","ts":"2025-09-08T17:46:06.564066Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	{"level":"info","ts":"2025-09-08T17:46:06.568313Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:a14f9258d3b66c75 RaftAttributes:{PeerURLs:[https://192.168.39.196:2380] IsLearner:false} Attributes:{Name:pause-582402 ClientURLs:[https://192.168.39.196:2379]}}"}
	
	
	==> etcd [ee46f499922fdb4631f75c5ceaed4cf569db29c8ef0ecb8253beb03d8eee294d] <==
	{"level":"warn","ts":"2025-09-08T17:46:31.312843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.340450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.350318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.363070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.377221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.404790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.430523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.439624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.453277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.474693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.476391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.491045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.502416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.525622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.536306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.552786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.561529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.576453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.585133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.619942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.648637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.659715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.668541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.683730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T17:46:31.744182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:46:47 up 3 min,  0 users,  load average: 0.59, 0.36, 0.15
	Linux pause-582402 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [57c683e60f5c3c4803ce01d037b426e03a2a2be90bf34b1a94b3a2cd4d9c5e76] <==
	{"level":"warn","ts":"2025-09-08T17:46:26.056422Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":78,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.080143Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":79,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.106685Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":80,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.129527Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":81,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.154023Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":82,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.180291Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":83,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.207063Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":84,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.232184Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":85,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.257770Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":86,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.282733Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":87,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.307312Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":88,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.331377Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":89,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.354811Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":90,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.378089Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":91,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.402147Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":92,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.428433Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":93,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.456121Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":94,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.479804Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":95,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.507164Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":96,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.533733Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":97,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.559711Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":98,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-08T17:46:26.585422Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000d3cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":99,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	E0908 17:46:26.585508       1 controller.go:97] Error removing old endpoints from kubernetes service: rpc error: code = Canceled desc = grpc: the client connection is closing
	E0908 17:46:26.680400       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0908 17:46:26.681032       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-apiserver [ccda1b726e4d850914c07874d01efb6cd70d3958c4ff375fe9129434c7a904bf] <==
	I0908 17:46:32.555687       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0908 17:46:32.556178       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0908 17:46:32.563398       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0908 17:46:32.563805       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0908 17:46:32.563966       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0908 17:46:32.563997       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0908 17:46:32.564013       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0908 17:46:32.564163       1 aggregator.go:171] initial CRD sync complete...
	I0908 17:46:32.564174       1 autoregister_controller.go:144] Starting autoregister controller
	I0908 17:46:32.564182       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0908 17:46:32.564189       1 cache.go:39] Caches are synced for autoregister controller
	I0908 17:46:32.564422       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0908 17:46:32.564464       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0908 17:46:32.574775       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0908 17:46:32.574859       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I0908 17:46:32.574949       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0908 17:46:33.362295       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0908 17:46:33.456300       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W0908 17:46:33.794207       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.196]
	I0908 17:46:33.797274       1 controller.go:667] quota admission added evaluator for: endpoints
	I0908 17:46:33.815135       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 17:46:34.251601       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0908 17:46:34.320344       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0908 17:46:34.356317       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0908 17:46:34.366434       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [194e1ca66696d558ac6b8630dab91ff0537adf6ad4e3c214089e239178fdf36b] <==
	I0908 17:46:35.875004       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 17:46:35.880057       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 17:46:35.882327       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 17:46:35.886667       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 17:46:35.889115       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0908 17:46:35.889316       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 17:46:35.889422       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-582402"
	I0908 17:46:35.889462       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0908 17:46:35.891647       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 17:46:35.891997       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0908 17:46:35.893317       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 17:46:35.893412       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 17:46:35.893422       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 17:46:35.893430       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 17:46:35.893731       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 17:46:35.894032       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 17:46:35.894160       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0908 17:46:35.894169       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 17:46:35.894180       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 17:46:35.908451       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 17:46:35.908496       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 17:46:35.908508       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 17:46:35.916788       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 17:46:35.916804       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 17:46:35.926679       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [c589b89a7aa0d3b5a7f997dd3c56b8b94e86524e5bf723cc43b0e7a8ab505251] <==
	I0908 17:46:07.393897       1 serving.go:386] Generated self-signed cert in-memory
	I0908 17:46:08.076342       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0908 17:46:08.076398       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 17:46:08.079765       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0908 17:46:08.079909       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0908 17:46:08.080077       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0908 17:46:08.080695       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [16e436b9bd7b21a9b36bd130cd8ea344cda06a19dda917a779a8163811c5366e] <==
	I0908 17:46:33.920187       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 17:46:34.021719       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 17:46:34.021766       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.196"]
	E0908 17:46:34.021834       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 17:46:34.099077       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 17:46:34.099146       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 17:46:34.099169       1 server_linux.go:132] "Using iptables Proxier"
	I0908 17:46:34.119772       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 17:46:34.120134       1 server.go:527] "Version info" version="v1.34.0"
	I0908 17:46:34.121052       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 17:46:34.125984       1 config.go:200] "Starting service config controller"
	I0908 17:46:34.126045       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 17:46:34.126075       1 config.go:106] "Starting endpoint slice config controller"
	I0908 17:46:34.126090       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 17:46:34.126111       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 17:46:34.126125       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 17:46:34.126779       1 config.go:309] "Starting node config controller"
	I0908 17:46:34.128941       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 17:46:34.129234       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 17:46:34.226894       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 17:46:34.227004       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 17:46:34.227430       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14] <==
	I0908 17:46:24.081418       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 17:46:24.081508       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 17:46:24.081551       1 server_linux.go:132] "Using iptables Proxier"
	I0908 17:46:24.096420       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 17:46:24.097241       1 server.go:527] "Version info" version="v1.34.0"
	I0908 17:46:24.097294       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 17:46:24.105153       1 config.go:200] "Starting service config controller"
	I0908 17:46:24.106658       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 17:46:24.105553       1 config.go:106] "Starting endpoint slice config controller"
	I0908 17:46:24.106707       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 17:46:24.105623       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 17:46:24.106717       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	E0908 17:46:24.106272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.196:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E0908 17:46:24.106335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.196:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E0908 17:46:24.106386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.196:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0908 17:46:24.106449       1 config.go:309] "Starting node config controller"
	I0908 17:46:24.106735       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 17:46:24.106739       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E0908 17:46:24.107082       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.196:8443: connect: connection refused"
	E0908 17:46:25.234522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.196:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E0908 17:46:25.268730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.196:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 17:46:25.405050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.196:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	
	
	==> kube-scheduler [38b1c91b613d1aaf9bb6deded9721254f528b8ac9e760994795f5130b38a54e8] <==
	I0908 17:46:07.843863       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [f7bf55517747fb3d1d169b67e626f80cf6961f1bd4cf8064c4e0f3caa6b4d55a] <==
	I0908 17:46:31.667773       1 serving.go:386] Generated self-signed cert in-memory
	W0908 17:46:32.450033       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 17:46:32.452709       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 17:46:32.452780       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 17:46:32.452801       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 17:46:32.503183       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 17:46:32.503222       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 17:46:32.512069       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 17:46:32.512417       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 17:46:32.514645       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 17:46:32.514778       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0908 17:46:32.523128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I0908 17:46:32.615032       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.613515    3664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: E0908 17:46:32.629342    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-582402\" already exists" pod="kube-system/kube-scheduler-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: E0908 17:46:32.629912    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-582402\" already exists" pod="kube-system/kube-apiserver-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.630363    3664 kubelet_node_status.go:124] "Node was previously registered" node="pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.630552    3664 kubelet_node_status.go:78] "Successfully registered node" node="pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.630670    3664 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.631987    3664 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: E0908 17:46:32.648051    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-582402\" already exists" pod="kube-system/etcd-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.648075    3664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: E0908 17:46:32.671285    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-582402\" already exists" pod="kube-system/kube-apiserver-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.671512    3664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: E0908 17:46:32.688118    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-582402\" already exists" pod="kube-system/kube-controller-manager-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: I0908 17:46:32.688253    3664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-582402"
	Sep 08 17:46:32 pause-582402 kubelet[3664]: E0908 17:46:32.702784    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-582402\" already exists" pod="kube-system/kube-scheduler-pause-582402"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: I0908 17:46:33.216069    3664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-582402"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: E0908 17:46:33.227121    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-582402\" already exists" pod="kube-system/etcd-pause-582402"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: I0908 17:46:33.400835    3664 apiserver.go:52] "Watching apiserver"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: I0908 17:46:33.425474    3664 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: I0908 17:46:33.449802    3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff614778-696c-4113-9693-970eea6f5d45-xtables-lock\") pod \"kube-proxy-9ld9z\" (UID: \"ff614778-696c-4113-9693-970eea6f5d45\") " pod="kube-system/kube-proxy-9ld9z"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: I0908 17:46:33.450170    3664 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff614778-696c-4113-9693-970eea6f5d45-lib-modules\") pod \"kube-proxy-9ld9z\" (UID: \"ff614778-696c-4113-9693-970eea6f5d45\") " pod="kube-system/kube-proxy-9ld9z"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: I0908 17:46:33.619082    3664 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-582402"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: E0908 17:46:33.633186    3664 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-582402\" already exists" pod="kube-system/etcd-pause-582402"
	Sep 08 17:46:33 pause-582402 kubelet[3664]: I0908 17:46:33.705491    3664 scope.go:117] "RemoveContainer" containerID="3e24256763b07d205804fa5c79c8b9e867032c5fc7025460c97e0c4173d6da14"
	Sep 08 17:46:39 pause-582402 kubelet[3664]: E0908 17:46:39.588753    3664 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757353599587969576  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 08 17:46:39 pause-582402 kubelet[3664]: E0908 17:46:39.588783    3664 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757353599587969576  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-582402 -n pause-582402
helpers_test.go:269: (dbg) Run:  kubectl --context pause-582402 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (121.98s)

                                                
                                    

Test pass (281/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 27.44
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.13
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 15.2
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.15
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.62
22 TestOffline 65.22
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 209.32
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 10.55
35 TestAddons/parallel/Registry 19.64
36 TestAddons/parallel/RegistryCreds 1.23
38 TestAddons/parallel/InspektorGadget 6.33
39 TestAddons/parallel/MetricsServer 6.84
41 TestAddons/parallel/CSI 57.58
42 TestAddons/parallel/Headlamp 22.98
43 TestAddons/parallel/CloudSpanner 6.25
44 TestAddons/parallel/LocalPath 16.2
45 TestAddons/parallel/NvidiaDevicePlugin 7
46 TestAddons/parallel/Yakd 12.35
48 TestAddons/StoppedEnableDisable 91.27
49 TestCertOptions 69.03
50 TestCertExpiration 305.76
52 TestForceSystemdFlag 73.6
53 TestForceSystemdEnv 74.48
55 TestKVMDriverInstallOrUpdate 8.25
59 TestErrorSpam/setup 45.81
60 TestErrorSpam/start 0.35
61 TestErrorSpam/status 0.81
62 TestErrorSpam/pause 1.78
63 TestErrorSpam/unpause 2.01
64 TestErrorSpam/stop 5.41
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 87.35
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 33.68
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.49
76 TestFunctional/serial/CacheCmd/cache/add_local 2.32
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 42.6
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.48
87 TestFunctional/serial/LogsFileCmd 1.49
88 TestFunctional/serial/InvalidService 3.99
90 TestFunctional/parallel/ConfigCmd 0.31
91 TestFunctional/parallel/DashboardCmd 20.99
92 TestFunctional/parallel/DryRun 0.28
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 0.88
98 TestFunctional/parallel/ServiceCmdConnect 20.51
99 TestFunctional/parallel/AddonsCmd 0.12
100 TestFunctional/parallel/PersistentVolumeClaim 46.6
102 TestFunctional/parallel/SSHCmd 0.43
103 TestFunctional/parallel/CpCmd 1.32
104 TestFunctional/parallel/MySQL 23.7
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.31
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
114 TestFunctional/parallel/License 0.45
124 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
128 TestFunctional/parallel/ImageCommands/ImageBuild 6.43
129 TestFunctional/parallel/ImageCommands/Setup 1.96
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.48
135 TestFunctional/parallel/ProfileCmd/profile_list 0.35
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.03
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.01
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.02
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.98
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.71
143 TestFunctional/parallel/ServiceCmd/DeployApp 14.15
144 TestFunctional/parallel/MountCmd/any-port 10.88
145 TestFunctional/parallel/Version/short 0.05
146 TestFunctional/parallel/Version/components 0.47
147 TestFunctional/parallel/ServiceCmd/List 1.25
148 TestFunctional/parallel/ServiceCmd/JSONOutput 1.27
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
150 TestFunctional/parallel/ServiceCmd/Format 0.4
151 TestFunctional/parallel/ServiceCmd/URL 0.37
152 TestFunctional/parallel/MountCmd/specific-port 1.75
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.68
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 245.02
162 TestMultiControlPlane/serial/DeployApp 7.56
163 TestMultiControlPlane/serial/PingHostFromPods 1.21
164 TestMultiControlPlane/serial/AddWorkerNode 58.25
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.95
167 TestMultiControlPlane/serial/CopyFile 13.64
168 TestMultiControlPlane/serial/StopSecondaryNode 91.72
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
170 TestMultiControlPlane/serial/RestartSecondaryNode 38.27
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.18
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 410.59
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.67
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
175 TestMultiControlPlane/serial/StopCluster 272.56
176 TestMultiControlPlane/serial/RestartCluster 126.75
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
178 TestMultiControlPlane/serial/AddSecondaryNode 86.15
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
183 TestJSONOutput/start/Command 83.56
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.83
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.73
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.37
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 95.97
215 TestMountStart/serial/StartWithMountFirst 29.36
216 TestMountStart/serial/VerifyMountFirst 0.4
217 TestMountStart/serial/StartWithMountSecond 30.3
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.9
220 TestMountStart/serial/VerifyMountPostDelete 0.41
221 TestMountStart/serial/Stop 1.46
222 TestMountStart/serial/RestartStopped 26.07
223 TestMountStart/serial/VerifyMountPostStop 0.38
226 TestMultiNode/serial/FreshStart2Nodes 115.07
227 TestMultiNode/serial/DeployApp2Nodes 5.48
228 TestMultiNode/serial/PingHostFrom2Pods 0.81
229 TestMultiNode/serial/AddNode 53.5
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.6
232 TestMultiNode/serial/CopyFile 7.38
233 TestMultiNode/serial/StopNode 3.18
234 TestMultiNode/serial/StartAfterStop 40.77
235 TestMultiNode/serial/RestartKeepsNodes 354.43
236 TestMultiNode/serial/DeleteNode 2.83
237 TestMultiNode/serial/StopMultiNode 182.1
238 TestMultiNode/serial/RestartMultiNode 95.98
239 TestMultiNode/serial/ValidateNameConflict 48.33
246 TestScheduledStopUnix 119.08
250 TestRunningBinaryUpgrade 184.43
252 TestKubernetesUpgrade 201.82
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 99.82
264 TestNetworkPlugins/group/false 3.16
268 TestNoKubernetes/serial/StartWithStopK8s 64.1
269 TestNoKubernetes/serial/Start 58.14
270 TestStoppedBinaryUpgrade/Setup 3.1
271 TestStoppedBinaryUpgrade/Upgrade 154.94
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
273 TestNoKubernetes/serial/ProfileList 1.1
274 TestNoKubernetes/serial/Stop 1.49
275 TestNoKubernetes/serial/StartNoArgs 70.52
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
278 TestPause/serial/Start 102.87
279 TestStoppedBinaryUpgrade/MinikubeLogs 1.33
287 TestNetworkPlugins/group/auto/Start 90.48
288 TestNetworkPlugins/group/flannel/Start 208.43
289 TestNetworkPlugins/group/enable-default-cni/Start 140.27
291 TestNetworkPlugins/group/auto/KubeletFlags 0.33
292 TestNetworkPlugins/group/auto/NetCatPod 13.08
293 TestNetworkPlugins/group/auto/DNS 0.23
294 TestNetworkPlugins/group/auto/Localhost 0.13
295 TestNetworkPlugins/group/auto/HairPin 0.15
296 TestNetworkPlugins/group/bridge/Start 88.69
297 TestNetworkPlugins/group/calico/Start 84.31
298 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
299 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.31
300 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
301 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
302 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
303 TestNetworkPlugins/group/kindnet/Start 70.4
304 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
305 TestNetworkPlugins/group/bridge/NetCatPod 11.27
306 TestNetworkPlugins/group/flannel/ControllerPod 6.01
307 TestNetworkPlugins/group/bridge/DNS 0.16
308 TestNetworkPlugins/group/bridge/Localhost 0.17
309 TestNetworkPlugins/group/bridge/HairPin 0.15
310 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
311 TestNetworkPlugins/group/flannel/NetCatPod 11.26
312 TestNetworkPlugins/group/calico/ControllerPod 6.01
313 TestNetworkPlugins/group/flannel/DNS 0.21
314 TestNetworkPlugins/group/flannel/Localhost 0.19
315 TestNetworkPlugins/group/calico/KubeletFlags 0.29
316 TestNetworkPlugins/group/flannel/HairPin 0.17
317 TestNetworkPlugins/group/calico/NetCatPod 10.43
318 TestNetworkPlugins/group/custom-flannel/Start 81.52
319 TestNetworkPlugins/group/calico/DNS 0.2
320 TestNetworkPlugins/group/calico/Localhost 0.17
321 TestNetworkPlugins/group/calico/HairPin 0.15
323 TestStartStop/group/old-k8s-version/serial/FirstStart 105.75
324 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
325 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
326 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
328 TestStartStop/group/no-preload/serial/FirstStart 115.55
329 TestNetworkPlugins/group/kindnet/DNS 0.19
330 TestNetworkPlugins/group/kindnet/Localhost 0.16
331 TestNetworkPlugins/group/kindnet/HairPin 0.16
333 TestStartStop/group/embed-certs/serial/FirstStart 122.91
334 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
335 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.29
336 TestNetworkPlugins/group/custom-flannel/DNS 0.19
337 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
338 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 66.02
341 TestStartStop/group/old-k8s-version/serial/DeployApp 12.4
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.26
343 TestStartStop/group/old-k8s-version/serial/Stop 91.09
344 TestStartStop/group/no-preload/serial/DeployApp 12.3
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.68
346 TestStartStop/group/no-preload/serial/Stop 91.07
347 TestStartStop/group/embed-certs/serial/DeployApp 11.29
348 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
349 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
350 TestStartStop/group/embed-certs/serial/Stop 91.73
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.41
353 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
354 TestStartStop/group/old-k8s-version/serial/SecondStart 50.12
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
356 TestStartStop/group/no-preload/serial/SecondStart 65.94
357 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.05
358 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
359 TestStartStop/group/embed-certs/serial/SecondStart 51.31
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 73.71
362 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
363 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
364 TestStartStop/group/old-k8s-version/serial/Pause 3.08
366 TestStartStop/group/newest-cni/serial/FirstStart 80.71
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
368 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.16
369 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
370 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
371 TestStartStop/group/no-preload/serial/Pause 4.15
372 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
373 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
374 TestStartStop/group/embed-certs/serial/Pause 2.96
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.01
379 TestStartStop/group/newest-cni/serial/DeployApp 0
380 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.97
381 TestStartStop/group/newest-cni/serial/Stop 10.56
382 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
383 TestStartStop/group/newest-cni/serial/SecondStart 39.16
384 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
387 TestStartStop/group/newest-cni/serial/Pause 4.14
x
+
TestDownloadOnly/v1.28.0/json-events (27.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-281275 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-281275 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (27.44112376s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (27.44s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 16:37:07.489875   11781 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0908 16:37:07.489973   11781 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-281275
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-281275: exit status 85 (60.52776ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-281275 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-281275 │ jenkins │ v1.36.0 │ 08 Sep 25 16:36 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 16:36:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 16:36:40.086750   11793 out.go:360] Setting OutFile to fd 1 ...
	I0908 16:36:40.086854   11793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:36:40.086865   11793 out.go:374] Setting ErrFile to fd 2...
	I0908 16:36:40.086871   11793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:36:40.087095   11793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
	W0908 16:36:40.087506   11793 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21504-7629/.minikube/config/config.json: open /home/jenkins/minikube-integration/21504-7629/.minikube/config/config.json: no such file or directory
	I0908 16:36:40.088389   11793 out.go:368] Setting JSON to true
	I0908 16:36:40.089514   11793 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1143,"bootTime":1757348257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 16:36:40.089602   11793 start.go:140] virtualization: kvm guest
	I0908 16:36:40.091821   11793 out.go:99] [download-only-281275] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0908 16:36:40.091949   11793 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 16:36:40.091969   11793 notify.go:220] Checking for updates...
	I0908 16:36:40.093153   11793 out.go:171] MINIKUBE_LOCATION=21504
	I0908 16:36:40.094685   11793 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 16:36:40.096198   11793 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 16:36:40.097818   11793 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 16:36:40.099004   11793 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 16:36:40.101755   11793 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 16:36:40.101957   11793 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 16:36:40.202689   11793 out.go:99] Using the kvm2 driver based on user configuration
	I0908 16:36:40.202725   11793 start.go:304] selected driver: kvm2
	I0908 16:36:40.202732   11793 start.go:918] validating driver "kvm2" against <nil>
	I0908 16:36:40.203198   11793 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 16:36:40.203347   11793 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21504-7629/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	W0908 16:36:40.207954   11793 install.go:62] docker-machine-driver-kvm2: exit status 1
	I0908 16:36:40.209730   11793 out.go:99] Downloading driver docker-machine-driver-kvm2:
	I0908 16:36:40.209834   11793 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:36:40.865823   11793 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 16:36:40.866433   11793 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0908 16:36:40.867102   11793 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 16:36:40.867134   11793 cni.go:84] Creating CNI manager for ""
	I0908 16:36:40.867173   11793 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 16:36:40.867182   11793 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 16:36:40.867242   11793 start.go:348] cluster config:
	{Name:download-only-281275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-281275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 16:36:40.867404   11793 iso.go:125] acquiring lock: {Name:mkaf49872b434993209a65bf0f93ea3e4c6d93b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 16:36:40.869673   11793 out.go:99] Downloading VM boot image ...
	I0908 16:36:40.869709   11793 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21504-7629/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0908 16:36:52.237460   11793 out.go:99] Starting "download-only-281275" primary control-plane node in "download-only-281275" cluster
	I0908 16:36:52.237487   11793 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 16:36:52.352299   11793 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0908 16:36:52.352343   11793 cache.go:58] Caching tarball of preloaded images
	I0908 16:36:52.352529   11793 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 16:36:52.354534   11793 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 16:36:52.354562   11793 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 16:36:52.466191   11793 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-281275 host does not exist
	  To start a cluster, run: "minikube start -p download-only-281275"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.13s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-281275
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (15.2s)
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-217769 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-217769 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (15.195145759s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (15.20s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 16:37:23.007658   11781 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0908 16:37:23.007693   11781 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-217769
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-217769: exit status 85 (60.270209ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-281275 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-281275 │ jenkins │ v1.36.0 │ 08 Sep 25 16:36 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 16:37 UTC │ 08 Sep 25 16:37 UTC │
	│ delete  │ -p download-only-281275                                                                                                                                                 │ download-only-281275 │ jenkins │ v1.36.0 │ 08 Sep 25 16:37 UTC │ 08 Sep 25 16:37 UTC │
	│ start   │ -o=json --download-only -p download-only-217769 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-217769 │ jenkins │ v1.36.0 │ 08 Sep 25 16:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 16:37:07
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 16:37:07.853050   12060 out.go:360] Setting OutFile to fd 1 ...
	I0908 16:37:07.853431   12060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:37:07.853491   12060 out.go:374] Setting ErrFile to fd 2...
	I0908 16:37:07.853501   12060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:37:07.853826   12060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
	I0908 16:37:07.854431   12060 out.go:368] Setting JSON to true
	I0908 16:37:07.855340   12060 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1171,"bootTime":1757348257,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 16:37:07.855428   12060 start.go:140] virtualization: kvm guest
	I0908 16:37:07.857328   12060 out.go:99] [download-only-217769] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 16:37:07.857470   12060 notify.go:220] Checking for updates...
	I0908 16:37:07.858732   12060 out.go:171] MINIKUBE_LOCATION=21504
	I0908 16:37:07.860066   12060 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 16:37:07.861231   12060 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 16:37:07.862438   12060 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 16:37:07.863704   12060 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 16:37:07.866045   12060 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 16:37:07.866237   12060 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 16:37:07.897841   12060 out.go:99] Using the kvm2 driver based on user configuration
	I0908 16:37:07.897867   12060 start.go:304] selected driver: kvm2
	I0908 16:37:07.897873   12060 start.go:918] validating driver "kvm2" against <nil>
	I0908 16:37:07.898160   12060 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 16:37:07.898224   12060 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21504-7629/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0908 16:37:07.913330   12060 install.go:137] /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0908 16:37:07.913400   12060 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 16:37:07.913988   12060 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0908 16:37:07.914158   12060 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 16:37:07.914191   12060 cni.go:84] Creating CNI manager for ""
	I0908 16:37:07.914272   12060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0908 16:37:07.914282   12060 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 16:37:07.914349   12060 start.go:348] cluster config:
	{Name:download-only-217769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-217769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 16:37:07.914457   12060 iso.go:125] acquiring lock: {Name:mkaf49872b434993209a65bf0f93ea3e4c6d93b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 16:37:07.916305   12060 out.go:99] Starting "download-only-217769" primary control-plane node in "download-only-217769" cluster
	I0908 16:37:07.916328   12060 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 16:37:08.023649   12060 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 16:37:08.023682   12060 cache.go:58] Caching tarball of preloaded images
	I0908 16:37:08.023856   12060 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 16:37:08.025772   12060 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0908 16:37:08.025797   12060 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 16:37:08.142111   12060 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2ff28357f4fb6607eaee8f503f8804cd -> /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 16:37:21.237214   12060 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 16:37:21.237311   12060 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 16:37:22.024001   12060 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 16:37:22.024363   12060 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/download-only-217769/config.json ...
	I0908 16:37:22.024392   12060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/download-only-217769/config.json: {Name:mk3d8ee591692e6108e0ebeeb907ddc35a56ee9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 16:37:22.024533   12060 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 16:37:22.024670   12060 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21504-7629/.minikube/cache/linux/amd64/v1.34.0/kubectl
	
	
	* The control-plane node download-only-217769 host does not exist
	  To start a cluster, run: "minikube start -p download-only-217769"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)
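For reference, the preload tarball cached above can be spot-checked against the md5 the downloader requested; a minimal sketch using the path and digest from the download line in the log above:

md5sum /home/jenkins/minikube-integration/21504-7629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
# expected digest, taken from the download URL above: 2ff28357f4fb6607eaee8f503f8804cd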

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.15s)
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-217769
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.62s)
=== RUN   TestBinaryMirror
I0908 16:37:23.597454   11781 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-853404 --alsologtostderr --binary-mirror http://127.0.0.1:45175 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-853404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-853404
--- PASS: TestBinaryMirror (0.62s)
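TestBinaryMirror verifies that minikube can fetch kubectl from a user-supplied HTTP endpoint instead of dl.k8s.io. A minimal sketch of the same idea outside the harness; the port, profile name, and local mirror path are illustrative assumptions, and the exact directory layout expected under the mirror is not shown in this log:

# serve a local directory that mirrors the Kubernetes release layout, then
# point minikube's binary downloads at it
(cd /path/to/local-mirror && python3 -m http.server 45175) &
out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:45175 --driver=kvm2 --container-runtime=crio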

                                                
                                    
TestOffline (65.22s)
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-088150 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-088150 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.083089284s)
helpers_test.go:175: Cleaning up "offline-crio-088150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-088150
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-088150: (1.132298474s)
--- PASS: TestOffline (65.22s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-198632
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-198632: exit status 85 (52.354337ms)

                                                
                                                
-- stdout --
	* Profile "addons-198632" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-198632"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-198632
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-198632: exit status 85 (53.503571ms)

                                                
                                                
-- stdout --
	* Profile "addons-198632" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-198632"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (209.32s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-198632 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-198632 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m29.315492523s)
--- PASS: TestAddons/Setup (209.32s)
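Once a start like the one above completes, the resulting addon state for the profile can be listed directly; a one-line sketch (profile name taken from the test):

out/minikube-linux-amd64 -p addons-198632 addons list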

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-198632 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-198632 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.55s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-198632 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-198632 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [68ecb2ec-8675-4c4b-8edf-5e76a3a05382] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [68ecb2ec-8675-4c4b-8edf-5e76a3a05382] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004387318s
addons_test.go:694: (dbg) Run:  kubectl --context addons-198632 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-198632 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-198632 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.55s)

                                                
                                    
TestAddons/parallel/Registry (19.64s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.473357ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-2rfbq" [2e4a3af7-b70f-45f1-a394-ac4021197c28] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005196319s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-np9l2" [8effc7a9-f128-4316-b4c5-e7bfa6c1a551] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006658514s
addons_test.go:392: (dbg) Run:  kubectl --context addons-198632 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-198632 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-198632 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.730597293s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 ip
2025/09/08 16:41:31 [DEBUG] GET http://192.168.39.229:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.64s)
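The GET against port 5000 above reaches the registry addon through the node IP. A minimal sketch of the same probe by hand, using the standard Docker Registry API root (the /v2/ path is an assumption about the addon's registry, not something this test calls):

curl -sI "http://$(out/minikube-linux-amd64 -p addons-198632 ip):5000/v2/"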

                                                
                                    
TestAddons/parallel/RegistryCreds (1.23s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 7.037549ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-198632
addons_test.go:332: (dbg) Run:  kubectl --context addons-198632 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198632 addons disable registry-creds --alsologtostderr -v=1: (1.045644383s)
--- PASS: TestAddons/parallel/RegistryCreds (1.23s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.33s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-fs8hb" [28f4e5bf-bd39-48fa-93c9-f5c6e419bd61] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003688796s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.33s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.84s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 15.670153ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-lxt6t" [a7f04275-2bea-4a1b-a130-7c1ce5d784b4] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006386793s
addons_test.go:463: (dbg) Run:  kubectl --context addons-198632 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.84s)

                                                
                                    
TestAddons/parallel/CSI (57.58s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0908 16:41:26.924210   11781 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 16:41:26.929456   11781 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 16:41:26.929486   11781 kapi.go:107] duration metric: took 5.300954ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.313448ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-198632 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-198632 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c615d344-c7aa-435d-b071-df45cbb218cb] Pending
helpers_test.go:352: "task-pv-pod" [c615d344-c7aa-435d-b071-df45cbb218cb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c615d344-c7aa-435d-b071-df45cbb218cb] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004546248s
addons_test.go:572: (dbg) Run:  kubectl --context addons-198632 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-198632 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-198632 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-198632 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-198632 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-198632 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-198632 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [79e390fa-bd60-446d-b674-6c5aca99fb57] Pending
helpers_test.go:352: "task-pv-pod-restore" [79e390fa-bd60-446d-b674-6c5aca99fb57] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [79e390fa-bd60-446d-b674-6c5aca99fb57] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004560114s
addons_test.go:614: (dbg) Run:  kubectl --context addons-198632 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-198632 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-198632 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198632 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.959240399s)
--- PASS: TestAddons/parallel/CSI (57.58s)
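The snapshot and restore steps above go through the standard snapshot.storage.k8s.io API: a VolumeSnapshot taken from the hpvc claim, then a new claim restored from it via dataSource. The testdata manifests are not reproduced in this log; the following is a generic sketch of equivalent objects, with the snapshot class and storage class names assumed rather than taken from the addon:

kubectl --context addons-198632 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed class name
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF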

                                                
                                    
TestAddons/parallel/Headlamp (22.98s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-198632 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-198632 --alsologtostderr -v=1: (1.241083482s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-8wxc7" [daf97643-07ba-4d7a-bd66-414516e4de62] Pending
helpers_test.go:352: "headlamp-6f46646d79-8wxc7" [daf97643-07ba-4d7a-bd66-414516e4de62] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-8wxc7" [daf97643-07ba-4d7a-bd66-414516e4de62] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-8wxc7" [daf97643-07ba-4d7a-bd66-414516e4de62] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.00649741s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198632 addons disable headlamp --alsologtostderr -v=1: (6.73411599s)
--- PASS: TestAddons/parallel/Headlamp (22.98s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.25s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-m6zqg" [428895a4-1fee-48ae-bb8b-b3464381069a] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006168433s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198632 addons disable cloud-spanner --alsologtostderr -v=1: (1.237422555s)
--- PASS: TestAddons/parallel/CloudSpanner (6.25s)

                                                
                                    
TestAddons/parallel/LocalPath (16.2s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-198632 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-198632 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-198632 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [50c13c3a-e504-474d-8ce1-69dcf1c0b4e1] Pending
helpers_test.go:352: "test-local-path" [50c13c3a-e504-474d-8ce1-69dcf1c0b4e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [50c13c3a-e504-474d-8ce1-69dcf1c0b4e1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [50c13c3a-e504-474d-8ce1-69dcf1c0b4e1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.004160284s
addons_test.go:967: (dbg) Run:  kubectl --context addons-198632 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 ssh "cat /opt/local-path-provisioner/pvc-865d9e29-b122-40d8-9365-422fafd2157b_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-198632 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-198632 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (16.20s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (7s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-95xl2" [5441acea-71a3-45ab-b5f1-235609d5d13c] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003539785s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198632 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.000344547s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.00s)

                                                
                                    
TestAddons/parallel/Yakd (12.35s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-qn7dv" [2839b85f-b1ca-42c0-89b6-244db678833e] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00389795s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-198632 addons disable yakd --alsologtostderr -v=1: (6.342699911s)
--- PASS: TestAddons/parallel/Yakd (12.35s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.27s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-198632
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-198632: (1m30.991661948s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-198632
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-198632
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-198632
--- PASS: TestAddons/StoppedEnableDisable (91.27s)

                                                
                                    
TestCertOptions (69.03s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-822744 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-822744 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m7.776484577s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-822744 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-822744 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-822744 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-822744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-822744
--- PASS: TestCertOptions (69.03s)
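The openssl invocation above dumps the whole apiserver certificate; the specific assertions (the extra SANs and the non-default 8555 port) can be checked by hand against the same output. A minimal sketch (the grep pattern is illustrative, not part of the test):

out/minikube-linux-amd64 -p cert-options-822744 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"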

                                                
                                    
TestCertExpiration (305.76s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-778398 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-778398 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m32.461818358s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-778398 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-778398 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (32.479475626s)
helpers_test.go:175: Cleaning up "cert-expiration-778398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-778398
--- PASS: TestCertExpiration (305.76s)

                                                
                                    
TestForceSystemdFlag (73.6s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-637301 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
I0908 17:39:24.184877   11781 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 17:39:24.200510   11781 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0908 17:39:24.228049   11781 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0908 17:39:24.228074   11781 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0908 17:39:24.228129   11781 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 17:39:24.228151   11781 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2985582854/002/docker-machine-driver-kvm2
I0908 17:39:24.289605   11781 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2985582854/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc0006112a0 gz:0xc0006112a8 tar:0xc000611250 tar.bz2:0xc000611260 tar.gz:0xc000611270 tar.xz:0xc000611280 tar.zst:0xc000611290 tbz2:0xc000611260 tgz:0xc000611270 txz:0xc000611280 tzst:0xc000611290 xz:0xc0006112b0 zip:0xc0006112c0 zst:0xc0006112b8] Getters:map[file:0xc001d1af20 http:0xc00029da40 https:0xc00029da90] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 17:39:24.289648   11781 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2985582854/002/docker-machine-driver-kvm2
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-637301 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m12.586014793s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-637301 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-637301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-637301
--- PASS: TestForceSystemdFlag (73.60s)
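The ssh step above only cats the CRI-O drop-in; the setting the --force-systemd flag is meant to change is the cgroup manager. A minimal sketch of checking it directly (key name per CRI-O's config format; the expected value is an assumption about what the flag produces):

out/minikube-linux-amd64 -p force-systemd-flag-637301 ssh \
  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
# expected with --force-systemd: cgroup_manager = "systemd"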

                                                
                                    
TestForceSystemdEnv (74.48s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-113303 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-113303 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m13.607639129s)
helpers_test.go:175: Cleaning up "force-systemd-env-113303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-113303
--- PASS: TestForceSystemdEnv (74.48s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (8.25s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0908 17:39:21.852270   11781 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 17:39:21.852449   11781 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0908 17:39:21.880634   11781 install.go:62] docker-machine-driver-kvm2: exit status 1
W0908 17:39:21.880868   11781 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 17:39:21.880938   11781 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2985582854/001/docker-machine-driver-kvm2
I0908 17:39:22.196857   11781 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2985582854/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc0006112a0 gz:0xc0006112a8 tar:0xc000611250 tar.bz2:0xc000611260 tar.gz:0xc000611270 tar.xz:0xc000611280 tar.zst:0xc000611290 tbz2:0xc000611260 tgz:0xc000611270 txz:0xc000611280 tzst:0xc000611290 xz:0xc0006112b0 zip:0xc0006112c0 zst:0xc0006112b8] Getters:map[file:0xc001d1a330 http:0xc0006310e0 https:0xc000631540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 17:39:22.196932   11781 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2985582854/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (8.25s)
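Both TestKVMDriverInstallOrUpdate runs above show the same fallback: the checksum fetch for the arch-specific driver URL returns 404, so the installer retries with the common (un-suffixed) artifact name, and the test still passes in 8.25s. A minimal sketch of that retry pattern, using a hypothetical fetch helper rather than minikube's actual download package:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch downloads url to dst and treats any non-200 response as an error.
// It is a stand-in for the checksum-verified download helper seen in the log.
func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
	dst := "/tmp/docker-machine-driver-kvm2"

	// Prefer the arch-specific artifact; on failure fall back to the common name,
	// mirroring the "trying to get the common version" lines above.
	if err := fetch(base+"-amd64", dst); err != nil {
		fmt.Println("arch specific download failed:", err)
		if err := fetch(base, dst); err != nil {
			fmt.Println("common version download failed:", err)
			os.Exit(1)
		}
	}
	fmt.Println("driver downloaded to", dst)
}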

                                                
                                    
x
+
TestErrorSpam/setup (45.81s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-750500 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-750500 --driver=kvm2  --container-runtime=crio
E0908 16:45:54.298704   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:54.305116   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:54.316596   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:54.338037   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:54.379546   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:54.461158   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:54.622717   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:54.944493   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:55.586530   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:56.868621   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:45:59.431544   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:46:04.553094   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:46:14.794677   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:46:35.276150   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-750500 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-750500 --driver=kvm2  --container-runtime=crio: (45.805750796s)
--- PASS: TestErrorSpam/setup (45.81s)

                                                
                                    
x
+
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
x
+
TestErrorSpam/status (0.81s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 status
--- PASS: TestErrorSpam/status (0.81s)

                                                
                                    
x
+
TestErrorSpam/pause (1.78s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 pause
--- PASS: TestErrorSpam/pause (1.78s)

                                                
                                    
x
+
TestErrorSpam/unpause (2.01s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 unpause
--- PASS: TestErrorSpam/unpause (2.01s)

                                                
                                    
x
+
TestErrorSpam/stop (5.41s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 stop: (2.346960944s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 stop: (1.36634732s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-750500 --log_dir /tmp/nospam-750500 stop: (1.694835454s)
--- PASS: TestErrorSpam/stop (5.41s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21504-7629/.minikube/files/etc/test/nested/copy/11781/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (87.35s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-504207 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0908 16:47:16.237824   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-504207 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m27.349652112s)
--- PASS: TestFunctional/serial/StartWithProxy (87.35s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (33.68s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0908 16:48:15.940036   11781 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-504207 --alsologtostderr -v=8
E0908 16:48:38.159936   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-504207 --alsologtostderr -v=8: (33.683212383s)
functional_test.go:678: soft start took 33.683898892s for "functional-504207" cluster.
I0908 16:48:49.623553   11781 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (33.68s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-504207 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-504207 cache add registry.k8s.io/pause:3.1: (1.129066645s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-504207 cache add registry.k8s.io/pause:3.3: (1.263761086s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-504207 cache add registry.k8s.io/pause:latest: (1.096013937s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-504207 /tmp/TestFunctionalserialCacheCmdcacheadd_local1506275088/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 cache add minikube-local-cache-test:functional-504207
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-504207 cache add minikube-local-cache-test:functional-504207: (1.997593934s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 cache delete minikube-local-cache-test:functional-504207
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-504207
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.32s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504207 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (224.727709ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-504207 cache reload: (1.030815045s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 kubectl -- --context functional-504207 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-504207 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (42.6s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-504207 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-504207 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.602299541s)
functional_test.go:776: restart took 42.602461302s for "functional-504207" cluster.
I0908 16:49:40.544510   11781 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (42.60s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-504207 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-504207 logs: (1.477004817s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 logs --file /tmp/TestFunctionalserialLogsFileCmd2782805973/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-504207 logs --file /tmp/TestFunctionalserialLogsFileCmd2782805973/001/logs.txt: (1.491362537s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.99s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-504207 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-504207
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-504207: exit status 115 (293.700823ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.246:31186 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-504207 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.99s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504207 config get cpus: exit status 14 (51.614478ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504207 config get cpus: exit status 14 (45.847005ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (20.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-504207 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-504207 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 20347: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.99s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-504207 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-504207 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.813913ms)

                                                
                                                
-- stdout --
	* [functional-504207] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 16:50:12.654930   20202 out.go:360] Setting OutFile to fd 1 ...
	I0908 16:50:12.655169   20202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:50:12.655179   20202 out.go:374] Setting ErrFile to fd 2...
	I0908 16:50:12.655183   20202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:50:12.655473   20202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
	I0908 16:50:12.656136   20202 out.go:368] Setting JSON to false
	I0908 16:50:12.656975   20202 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1956,"bootTime":1757348257,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 16:50:12.657073   20202 start.go:140] virtualization: kvm guest
	I0908 16:50:12.658492   20202 out.go:179] * [functional-504207] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 16:50:12.660100   20202 notify.go:220] Checking for updates...
	I0908 16:50:12.660133   20202 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 16:50:12.661326   20202 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 16:50:12.662422   20202 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 16:50:12.663531   20202 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 16:50:12.664707   20202 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 16:50:12.665748   20202 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 16:50:12.667070   20202 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 16:50:12.667551   20202 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:50:12.667625   20202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:50:12.684974   20202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
	I0908 16:50:12.685482   20202 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:50:12.686005   20202 main.go:141] libmachine: Using API Version  1
	I0908 16:50:12.686032   20202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:50:12.686398   20202 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:50:12.686574   20202 main.go:141] libmachine: (functional-504207) Calling .DriverName
	I0908 16:50:12.686842   20202 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 16:50:12.687132   20202 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:50:12.687164   20202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:50:12.704438   20202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I0908 16:50:12.704850   20202 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:50:12.705238   20202 main.go:141] libmachine: Using API Version  1
	I0908 16:50:12.705261   20202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:50:12.705647   20202 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:50:12.705827   20202 main.go:141] libmachine: (functional-504207) Calling .DriverName
	I0908 16:50:12.740287   20202 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 16:50:12.741503   20202 start.go:304] selected driver: kvm2
	I0908 16:50:12.741525   20202 start.go:918] validating driver "kvm2" against &{Name:functional-504207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-504207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 16:50:12.741664   20202 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 16:50:12.744204   20202 out.go:203] 
	W0908 16:50:12.745610   20202 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 16:50:12.746896   20202 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-504207 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
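Both dry runs fail fast with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MB is below the stated usable minimum of 1800MB; nothing is provisioned, which is the behavior the test checks. A minimal sketch of such a pre-flight check, with the threshold and exit code hard-coded here as assumptions taken only from the log text, not from minikube's source:

package main

import (
	"fmt"
	"os"
)

// Assumed threshold, matching the message printed in the run above.
const minUsableMemoryMB = 1800

// validateMemory rejects a --memory request smaller than the usable minimum
// before any driver or VM work is attempted.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // the dry run above exits with status 23
	}
}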

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-504207 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-504207 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (137.873989ms)

                                                
                                                
-- stdout --
	* [functional-504207] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 16:50:11.317015   19956 out.go:360] Setting OutFile to fd 1 ...
	I0908 16:50:11.317121   19956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:50:11.317126   19956 out.go:374] Setting ErrFile to fd 2...
	I0908 16:50:11.317130   19956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:50:11.317398   19956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
	I0908 16:50:11.318000   19956 out.go:368] Setting JSON to false
	I0908 16:50:11.318919   19956 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1954,"bootTime":1757348257,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 16:50:11.319011   19956 start.go:140] virtualization: kvm guest
	I0908 16:50:11.320830   19956 out.go:179] * [functional-504207] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0908 16:50:11.321724   19956 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 16:50:11.321748   19956 notify.go:220] Checking for updates...
	I0908 16:50:11.324008   19956 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 16:50:11.325185   19956 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 16:50:11.326320   19956 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 16:50:11.327852   19956 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 16:50:11.329046   19956 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 16:50:11.330626   19956 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 16:50:11.331023   19956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:50:11.331086   19956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:50:11.346711   19956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I0908 16:50:11.347124   19956 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:50:11.347701   19956 main.go:141] libmachine: Using API Version  1
	I0908 16:50:11.347730   19956 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:50:11.348120   19956 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:50:11.348347   19956 main.go:141] libmachine: (functional-504207) Calling .DriverName
	I0908 16:50:11.348602   19956 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 16:50:11.348941   19956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:50:11.348989   19956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:50:11.363812   19956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41465
	I0908 16:50:11.364366   19956 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:50:11.364884   19956 main.go:141] libmachine: Using API Version  1
	I0908 16:50:11.364919   19956 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:50:11.365228   19956 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:50:11.365428   19956 main.go:141] libmachine: (functional-504207) Calling .DriverName
	I0908 16:50:11.400475   19956 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0908 16:50:11.402037   19956 start.go:304] selected driver: kvm2
	I0908 16:50:11.402054   19956 start.go:918] validating driver "kvm2" against &{Name:functional-504207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-504207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 16:50:11.402191   19956 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 16:50:11.404364   19956 out.go:203] 
	W0908 16:50:11.406106   19956 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 16:50:11.407333   19956 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.88s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (20.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-504207 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-504207 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-5ld7k" [d11e332e-9dd9-4194-85ec-91ddd63c3c9c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-5ld7k" [d11e332e-9dd9-4194-85ec-91ddd63c3c9c] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 20.00447865s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.246:30293
functional_test.go:1680: http://192.168.39.246:30293: success! body:
Request served by hello-node-connect-7d85dfc575-5ld7k

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.246:30293
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (20.51s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (46.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [db7ab3ee-d423-4631-9f52-c9dc827c3b6a] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003644276s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-504207 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-504207 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-504207 get pvc myclaim -o=json
I0908 16:49:54.931470   11781 retry.go:31] will retry after 2.940988288s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:08bd1add-f947-42d5-addd-4759fe83fe81 ResourceVersion:739 Generation:0 CreationTimestamp:2025-09-08 16:49:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a61ea0 VolumeMode:0xc001a61eb0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-504207 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-504207 apply -f testdata/storage-provisioner/pod.yaml
I0908 16:49:58.060619   11781 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f1192ee1-485b-4805-b2d6-9365b93de4cf] Pending
helpers_test.go:352: "sp-pod" [f1192ee1-485b-4805-b2d6-9365b93de4cf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [f1192ee1-485b-4805-b2d6-9365b93de4cf] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.00474209s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-504207 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-504207 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-504207 apply -f testdata/storage-provisioner/pod.yaml
I0908 16:50:18.137234   11781 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [19707721-5dca-4ddd-ade1-7be710edd1c0] Pending
helpers_test.go:352: "sp-pod" [19707721-5dca-4ddd-ade1-7be710edd1c0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [19707721-5dca-4ddd-ade1-7be710edd1c0] Running
2025/09/08 16:50:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.005902897s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-504207 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.60s)
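
The retry above polls the claim until its phase leaves "Pending". A minimal way to reproduce the same check by hand, reusing the claim spec recorded in the retry message (500Mi, ReadWriteOnce, default storage class backed by the minikube-hostpath provisioner), is sketched below; the names and the 2s sleep are illustrative, not the test's own fixtures.

# Apply a claim matching the spec seen in the retry message above
kubectl --context functional-504207 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF

# Poll the phase the same way the test does and stop once it reports Bound
until [ "$(kubectl --context functional-504207 get pvc myclaim -o jsonpath='{.status.phase}')" = "Bound" ]; do
  sleep 2
done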

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh -n functional-504207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 cp functional-504207:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2958313711/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh -n functional-504207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh -n functional-504207 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.32s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-504207 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-z9q5t" [75c3fdb6-9dc7-427e-8cda-aabfeea9cc58] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-z9q5t" [75c3fdb6-9dc7-427e-8cda-aabfeea9cc58] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.050658952s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-504207 exec mysql-5bb876957f-z9q5t -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-504207 exec mysql-5bb876957f-z9q5t -- mysql -ppassword -e "show databases;": exit status 1 (250.361104ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0908 16:50:08.539076   11781 retry.go:31] will retry after 609.057309ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-504207 exec mysql-5bb876957f-z9q5t -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-504207 exec mysql-5bb876957f-z9q5t -- mysql -ppassword -e "show databases;": exit status 1 (178.730887ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0908 16:50:09.327444   11781 retry.go:31] will retry after 2.199141719s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-504207 exec mysql-5bb876957f-z9q5t -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.70s)
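
The ERROR 2002 retries above are just the client racing mysqld's startup: the pod reports Running before the server socket exists, so the test backs off and retries (retry.go). A sketch of doing the same wait by hand, using the pod name from this run and mysqladmin's ping command (assumed to be present in the mysql:5.7 image):

POD=mysql-5bb876957f-z9q5t   # pod name from this run; it changes on every deployment
# Keep pinging the server inside the pod until it accepts connections
until kubectl --context functional-504207 exec "$POD" -- mysqladmin -ppassword ping >/dev/null 2>&1; do
  sleep 2
done
kubectl --context functional-504207 exec "$POD" -- mysql -ppassword -e "show databases;"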

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/11781/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "sudo cat /etc/test/nested/copy/11781/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)
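
The file under /etc/test/nested/copy/11781/hosts reaches the VM through minikube's file sync: anything placed under $MINIKUBE_HOME/.minikube/files on the host is copied into the guest at the same relative path when the node starts. A rough sketch of staging a file the same way (the path and content mirror this run's fixture and are illustrative):

# Stage a file on the host; the path below .minikube/files becomes the path in the guest
mkdir -p ~/.minikube/files/etc/test/nested/copy/11781
echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/11781/hosts

# After the next start the file should be readable inside the VM
out/minikube-linux-amd64 -p functional-504207 ssh "sudo cat /etc/test/nested/copy/11781/hosts"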

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/11781.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "sudo cat /etc/ssl/certs/11781.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/11781.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "sudo cat /usr/share/ca-certificates/11781.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/117812.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "sudo cat /etc/ssl/certs/117812.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/117812.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "sudo cat /usr/share/ca-certificates/117812.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.31s)
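
The second set of paths checked here (51391683.0, 3ec20f2e.0) are OpenSSL-style hash names under /etc/ssl/certs; by convention that file name is the subject hash of the synced certificate, so the pairing can be cross-checked by hand. A sketch, assuming openssl is available inside the guest and that 51391683.0 is indeed the hash entry for 11781.pem:

# Print the subject hash of the synced certificate inside the VM; if the pairing
# follows the usual convention this should print 51391683
out/minikube-linux-amd64 -p functional-504207 ssh \
  "openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/11781.pem"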

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-504207 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504207 ssh "sudo systemctl is-active docker": exit status 1 (235.952094ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504207 ssh "sudo systemctl is-active containerd": exit status 1 (227.537681ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
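
The "inactive" stdout paired with a failing exit code is ordinary systemd behaviour: systemctl is-active exits 0 only for an active unit and returns non-zero (status 3 in this run) otherwise, which ssh propagates and minikube surfaces as its own exit status 1. A quick manual check using the same command as the test:

# With the crio runtime selected, docker and containerd should both report inactive
out/minikube-linux-amd64 -p functional-504207 ssh "sudo systemctl is-active docker"
echo "minikube exit status: $?"   # 1 here, with "ssh: Process exited with status 3" on stderr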

                                                
                                    
x
+
TestFunctional/parallel/License (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-504207 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-504207
localhost/kicbase/echo-server:functional-504207
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-504207 image ls --format short --alsologtostderr:
I0908 16:50:22.610760   20815 out.go:360] Setting OutFile to fd 1 ...
I0908 16:50:22.610998   20815 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:50:22.611008   20815 out.go:374] Setting ErrFile to fd 2...
I0908 16:50:22.611012   20815 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:50:22.611259   20815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
I0908 16:50:22.611951   20815 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:50:22.612078   20815 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:50:22.612446   20815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
I0908 16:50:22.612497   20815 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 16:50:22.628036   20815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45447
I0908 16:50:22.628443   20815 main.go:141] libmachine: () Calling .GetVersion
I0908 16:50:22.629005   20815 main.go:141] libmachine: Using API Version  1
I0908 16:50:22.629029   20815 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 16:50:22.629416   20815 main.go:141] libmachine: () Calling .GetMachineName
I0908 16:50:22.629603   20815 main.go:141] libmachine: (functional-504207) Calling .GetState
I0908 16:50:22.631456   20815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
I0908 16:50:22.631492   20815 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 16:50:22.647356   20815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38187
I0908 16:50:22.647825   20815 main.go:141] libmachine: () Calling .GetVersion
I0908 16:50:22.648271   20815 main.go:141] libmachine: Using API Version  1
I0908 16:50:22.648294   20815 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 16:50:22.648677   20815 main.go:141] libmachine: () Calling .GetMachineName
I0908 16:50:22.648878   20815 main.go:141] libmachine: (functional-504207) Calling .DriverName
I0908 16:50:22.649076   20815 ssh_runner.go:195] Run: systemctl --version
I0908 16:50:22.649099   20815 main.go:141] libmachine: (functional-504207) Calling .GetSSHHostname
I0908 16:50:22.651819   20815 main.go:141] libmachine: (functional-504207) DBG | domain functional-504207 has defined MAC address 52:54:00:82:94:88 in network mk-functional-504207
I0908 16:50:22.652230   20815 main.go:141] libmachine: (functional-504207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:88", ip: ""} in network mk-functional-504207: {Iface:virbr1 ExpiryTime:2025-09-08 17:47:04 +0000 UTC Type:0 Mac:52:54:00:82:94:88 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:functional-504207 Clientid:01:52:54:00:82:94:88}
I0908 16:50:22.652261   20815 main.go:141] libmachine: (functional-504207) DBG | domain functional-504207 has defined IP address 192.168.39.246 and MAC address 52:54:00:82:94:88 in network mk-functional-504207
I0908 16:50:22.652387   20815 main.go:141] libmachine: (functional-504207) Calling .GetSSHPort
I0908 16:50:22.652541   20815 main.go:141] libmachine: (functional-504207) Calling .GetSSHKeyPath
I0908 16:50:22.652711   20815 main.go:141] libmachine: (functional-504207) Calling .GetSSHUsername
I0908 16:50:22.652847   20815 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/functional-504207/id_rsa Username:docker}
I0908 16:50:22.746222   20815 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 16:50:22.819280   20815 main.go:141] libmachine: Making call to close driver server
I0908 16:50:22.819299   20815 main.go:141] libmachine: (functional-504207) Calling .Close
I0908 16:50:22.819600   20815 main.go:141] libmachine: Successfully made call to close driver server
I0908 16:50:22.819619   20815 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 16:50:22.819628   20815 main.go:141] libmachine: Making call to close driver server
I0908 16:50:22.819636   20815 main.go:141] libmachine: (functional-504207) Calling .Close
I0908 16:50:22.819802   20815 main.go:141] libmachine: Successfully made call to close driver server
I0908 16:50:22.819816   20815 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 16:50:22.819821   20815 main.go:141] libmachine: (functional-504207) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
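
As the stderr above shows, "image ls" is answered by running sudo crictl images --output json inside the guest and reformatting the result. The same data can be pulled out directly; a sketch, assuming jq is installed on the host:

# List repo tags straight from the CRI-O image store, bypassing minikube's formatting
out/minikube-linux-amd64 -p functional-504207 ssh "sudo crictl images --output json" \
  | jq -r '.images[].repoTags[]'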

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-504207 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-504207  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ localhost/minikube-local-cache-test     │ functional-504207  │ a99285ff4d2e8 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/nginx                 │ latest             │ ad5708199ec7d │ 197MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-504207 image ls --format table --alsologtostderr:
I0908 16:50:26.003876   21233 out.go:360] Setting OutFile to fd 1 ...
I0908 16:50:26.004111   21233 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:50:26.004121   21233 out.go:374] Setting ErrFile to fd 2...
I0908 16:50:26.004128   21233 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:50:26.004340   21233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
I0908 16:50:26.004854   21233 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:50:26.004939   21233 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:50:26.005273   21233 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
I0908 16:50:26.005317   21233 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 16:50:26.021569   21233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
I0908 16:50:26.022020   21233 main.go:141] libmachine: () Calling .GetVersion
I0908 16:50:26.022553   21233 main.go:141] libmachine: Using API Version  1
I0908 16:50:26.022576   21233 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 16:50:26.022927   21233 main.go:141] libmachine: () Calling .GetMachineName
I0908 16:50:26.023125   21233 main.go:141] libmachine: (functional-504207) Calling .GetState
I0908 16:50:26.025210   21233 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
I0908 16:50:26.025246   21233 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 16:50:26.039650   21233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
I0908 16:50:26.040102   21233 main.go:141] libmachine: () Calling .GetVersion
I0908 16:50:26.040688   21233 main.go:141] libmachine: Using API Version  1
I0908 16:50:26.040712   21233 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 16:50:26.040989   21233 main.go:141] libmachine: () Calling .GetMachineName
I0908 16:50:26.041174   21233 main.go:141] libmachine: (functional-504207) Calling .DriverName
I0908 16:50:26.041375   21233 ssh_runner.go:195] Run: systemctl --version
I0908 16:50:26.041402   21233 main.go:141] libmachine: (functional-504207) Calling .GetSSHHostname
I0908 16:50:26.044330   21233 main.go:141] libmachine: (functional-504207) DBG | domain functional-504207 has defined MAC address 52:54:00:82:94:88 in network mk-functional-504207
I0908 16:50:26.044686   21233 main.go:141] libmachine: (functional-504207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:88", ip: ""} in network mk-functional-504207: {Iface:virbr1 ExpiryTime:2025-09-08 17:47:04 +0000 UTC Type:0 Mac:52:54:00:82:94:88 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:functional-504207 Clientid:01:52:54:00:82:94:88}
I0908 16:50:26.044713   21233 main.go:141] libmachine: (functional-504207) DBG | domain functional-504207 has defined IP address 192.168.39.246 and MAC address 52:54:00:82:94:88 in network mk-functional-504207
I0908 16:50:26.044843   21233 main.go:141] libmachine: (functional-504207) Calling .GetSSHPort
I0908 16:50:26.044997   21233 main.go:141] libmachine: (functional-504207) Calling .GetSSHKeyPath
I0908 16:50:26.045124   21233 main.go:141] libmachine: (functional-504207) Calling .GetSSHUsername
I0908 16:50:26.045265   21233 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/functional-504207/id_rsa Username:docker}
I0908 16:50:26.143704   21233 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 16:50:26.208392   21233 main.go:141] libmachine: Making call to close driver server
I0908 16:50:26.208408   21233 main.go:141] libmachine: (functional-504207) Calling .Close
I0908 16:50:26.208672   21233 main.go:141] libmachine: Successfully made call to close driver server
I0908 16:50:26.208691   21233 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 16:50:26.208702   21233 main.go:141] libmachine: Making call to close driver server
I0908 16:50:26.208710   21233 main.go:141] libmachine: (functional-504207) Calling .Close
I0908 16:50:26.208716   21233 main.go:141] libmachine: (functional-504207) DBG | Closing plugin on server side
I0908 16:50:26.208875   21233 main.go:141] libmachine: Successfully made call to close driver server
I0908 16:50:26.208888   21233 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-504207 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"a99285ff4d2e8347c07d3edd1baead30dbd79772c367491828c9d1079d16c5f2","repoDigests":["localhost/minikube-local-cache-test@sha256:6c762a432edc1823165042e2bc221017bb35b834098512a97d677ab74804681d"],"repoTags":["localhost/minikube-local-cache-test:functional-504207"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566
636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
"docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/k
icbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-504207"],"size":"4943877"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1f
aaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["regi
stry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57","docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7"],"repoTags":["docker.io/library/nginx:latest"],"size":"196544386"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c12
8c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-504207 image ls --format json --alsologtostderr:
I0908 16:50:25.761867   21208 out.go:360] Setting OutFile to fd 1 ...
I0908 16:50:25.762110   21208 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:50:25.762120   21208 out.go:374] Setting ErrFile to fd 2...
I0908 16:50:25.762125   21208 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:50:25.762345   21208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
I0908 16:50:25.762918   21208 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:50:25.762999   21208 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:50:25.763428   21208 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
I0908 16:50:25.763510   21208 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 16:50:25.779076   21208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
I0908 16:50:25.779611   21208 main.go:141] libmachine: () Calling .GetVersion
I0908 16:50:25.780316   21208 main.go:141] libmachine: Using API Version  1
I0908 16:50:25.780342   21208 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 16:50:25.780702   21208 main.go:141] libmachine: () Calling .GetMachineName
I0908 16:50:25.780875   21208 main.go:141] libmachine: (functional-504207) Calling .GetState
I0908 16:50:25.782806   21208 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
I0908 16:50:25.782851   21208 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 16:50:25.798069   21208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
I0908 16:50:25.798565   21208 main.go:141] libmachine: () Calling .GetVersion
I0908 16:50:25.799051   21208 main.go:141] libmachine: Using API Version  1
I0908 16:50:25.799080   21208 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 16:50:25.799573   21208 main.go:141] libmachine: () Calling .GetMachineName
I0908 16:50:25.799776   21208 main.go:141] libmachine: (functional-504207) Calling .DriverName
I0908 16:50:25.800056   21208 ssh_runner.go:195] Run: systemctl --version
I0908 16:50:25.800080   21208 main.go:141] libmachine: (functional-504207) Calling .GetSSHHostname
I0908 16:50:25.803037   21208 main.go:141] libmachine: (functional-504207) DBG | domain functional-504207 has defined MAC address 52:54:00:82:94:88 in network mk-functional-504207
I0908 16:50:25.803497   21208 main.go:141] libmachine: (functional-504207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:88", ip: ""} in network mk-functional-504207: {Iface:virbr1 ExpiryTime:2025-09-08 17:47:04 +0000 UTC Type:0 Mac:52:54:00:82:94:88 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:functional-504207 Clientid:01:52:54:00:82:94:88}
I0908 16:50:25.803527   21208 main.go:141] libmachine: (functional-504207) DBG | domain functional-504207 has defined IP address 192.168.39.246 and MAC address 52:54:00:82:94:88 in network mk-functional-504207
I0908 16:50:25.803700   21208 main.go:141] libmachine: (functional-504207) Calling .GetSSHPort
I0908 16:50:25.803890   21208 main.go:141] libmachine: (functional-504207) Calling .GetSSHKeyPath
I0908 16:50:25.804036   21208 main.go:141] libmachine: (functional-504207) Calling .GetSSHUsername
I0908 16:50:25.804177   21208 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/functional-504207/id_rsa Username:docker}
I0908 16:50:25.899009   21208 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 16:50:25.956404   21208 main.go:141] libmachine: Making call to close driver server
I0908 16:50:25.956418   21208 main.go:141] libmachine: (functional-504207) Calling .Close
I0908 16:50:25.956674   21208 main.go:141] libmachine: (functional-504207) DBG | Closing plugin on server side
I0908 16:50:25.956674   21208 main.go:141] libmachine: Successfully made call to close driver server
I0908 16:50:25.956698   21208 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 16:50:25.956708   21208 main.go:141] libmachine: Making call to close driver server
I0908 16:50:25.956716   21208 main.go:141] libmachine: (functional-504207) Calling .Close
I0908 16:50:25.956938   21208 main.go:141] libmachine: Successfully made call to close driver server
I0908 16:50:25.956952   21208 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 16:50:25.956966   21208 main.go:141] libmachine: (functional-504207) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-504207 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-504207
size: "4943877"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
- docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7
repoTags:
- docker.io/library/nginx:latest
size: "196544386"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a99285ff4d2e8347c07d3edd1baead30dbd79772c367491828c9d1079d16c5f2
repoDigests:
- localhost/minikube-local-cache-test@sha256:6c762a432edc1823165042e2bc221017bb35b834098512a97d677ab74804681d
repoTags:
- localhost/minikube-local-cache-test:functional-504207
size: "3330"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-504207 image ls --format yaml --alsologtostderr:
I0908 16:50:22.869725   20839 out.go:360] Setting OutFile to fd 1 ...
I0908 16:50:22.869955   20839 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:50:22.869966   20839 out.go:374] Setting ErrFile to fd 2...
I0908 16:50:22.869973   20839 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:50:22.870188   20839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
I0908 16:50:22.870762   20839 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:50:22.870852   20839 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:50:22.871229   20839 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
I0908 16:50:22.871278   20839 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 16:50:22.886486   20839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35631
I0908 16:50:22.886986   20839 main.go:141] libmachine: () Calling .GetVersion
I0908 16:50:22.887574   20839 main.go:141] libmachine: Using API Version  1
I0908 16:50:22.887598   20839 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 16:50:22.887932   20839 main.go:141] libmachine: () Calling .GetMachineName
I0908 16:50:22.888123   20839 main.go:141] libmachine: (functional-504207) Calling .GetState
I0908 16:50:22.889861   20839 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
I0908 16:50:22.889928   20839 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 16:50:22.904758   20839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39483
I0908 16:50:22.905261   20839 main.go:141] libmachine: () Calling .GetVersion
I0908 16:50:22.905834   20839 main.go:141] libmachine: Using API Version  1
I0908 16:50:22.905860   20839 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 16:50:22.906176   20839 main.go:141] libmachine: () Calling .GetMachineName
I0908 16:50:22.906343   20839 main.go:141] libmachine: (functional-504207) Calling .DriverName
I0908 16:50:22.906529   20839 ssh_runner.go:195] Run: systemctl --version
I0908 16:50:22.906550   20839 main.go:141] libmachine: (functional-504207) Calling .GetSSHHostname
I0908 16:50:22.909436   20839 main.go:141] libmachine: (functional-504207) DBG | domain functional-504207 has defined MAC address 52:54:00:82:94:88 in network mk-functional-504207
I0908 16:50:22.909908   20839 main.go:141] libmachine: (functional-504207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:88", ip: ""} in network mk-functional-504207: {Iface:virbr1 ExpiryTime:2025-09-08 17:47:04 +0000 UTC Type:0 Mac:52:54:00:82:94:88 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:functional-504207 Clientid:01:52:54:00:82:94:88}
I0908 16:50:22.909939   20839 main.go:141] libmachine: (functional-504207) DBG | domain functional-504207 has defined IP address 192.168.39.246 and MAC address 52:54:00:82:94:88 in network mk-functional-504207
I0908 16:50:22.910113   20839 main.go:141] libmachine: (functional-504207) Calling .GetSSHPort
I0908 16:50:22.910320   20839 main.go:141] libmachine: (functional-504207) Calling .GetSSHKeyPath
I0908 16:50:22.910493   20839 main.go:141] libmachine: (functional-504207) Calling .GetSSHUsername
I0908 16:50:22.910618   20839 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/functional-504207/id_rsa Username:docker}
I0908 16:50:23.008245   20839 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 16:50:23.078961   20839 main.go:141] libmachine: Making call to close driver server
I0908 16:50:23.078974   20839 main.go:141] libmachine: (functional-504207) Calling .Close
I0908 16:50:23.079260   20839 main.go:141] libmachine: Successfully made call to close driver server
I0908 16:50:23.079271   20839 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 16:50:23.079279   20839 main.go:141] libmachine: Making call to close driver server
I0908 16:50:23.079283   20839 main.go:141] libmachine: (functional-504207) Calling .Close
I0908 16:50:23.079509   20839 main.go:141] libmachine: Successfully made call to close driver server
I0908 16:50:23.079528   20839 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504207 ssh pgrep buildkitd: exit status 1 (236.975293ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image build -t localhost/my-image:functional-504207 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-504207 image build -t localhost/my-image:functional-504207 testdata/build --alsologtostderr: (5.974800621s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-504207 image build -t localhost/my-image:functional-504207 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3b806a954ad
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-504207
--> 58c2e51a1f4
Successfully tagged localhost/my-image:functional-504207
58c2e51a1f4666f8742cb5556cd717b413095d0baa21225c88aa90f4912d900f
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-504207 image build -t localhost/my-image:functional-504207 testdata/build --alsologtostderr:
I0908 16:50:23.369311   20954 out.go:360] Setting OutFile to fd 1 ...
I0908 16:50:23.369470   20954 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:50:23.369480   20954 out.go:374] Setting ErrFile to fd 2...
I0908 16:50:23.369484   20954 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 16:50:23.369653   20954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
I0908 16:50:23.370207   20954 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:50:23.370841   20954 config.go:182] Loaded profile config "functional-504207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 16:50:23.371184   20954 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
I0908 16:50:23.371221   20954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 16:50:23.386463   20954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
I0908 16:50:23.387058   20954 main.go:141] libmachine: () Calling .GetVersion
I0908 16:50:23.387643   20954 main.go:141] libmachine: Using API Version  1
I0908 16:50:23.387668   20954 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 16:50:23.388008   20954 main.go:141] libmachine: () Calling .GetMachineName
I0908 16:50:23.388169   20954 main.go:141] libmachine: (functional-504207) Calling .GetState
I0908 16:50:23.389960   20954 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
I0908 16:50:23.390000   20954 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 16:50:23.405942   20954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40703
I0908 16:50:23.406493   20954 main.go:141] libmachine: () Calling .GetVersion
I0908 16:50:23.407107   20954 main.go:141] libmachine: Using API Version  1
I0908 16:50:23.407141   20954 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 16:50:23.407511   20954 main.go:141] libmachine: () Calling .GetMachineName
I0908 16:50:23.407679   20954 main.go:141] libmachine: (functional-504207) Calling .DriverName
I0908 16:50:23.407899   20954 ssh_runner.go:195] Run: systemctl --version
I0908 16:50:23.407919   20954 main.go:141] libmachine: (functional-504207) Calling .GetSSHHostname
I0908 16:50:23.411330   20954 main.go:141] libmachine: (functional-504207) DBG | domain functional-504207 has defined MAC address 52:54:00:82:94:88 in network mk-functional-504207
I0908 16:50:23.411782   20954 main.go:141] libmachine: (functional-504207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:94:88", ip: ""} in network mk-functional-504207: {Iface:virbr1 ExpiryTime:2025-09-08 17:47:04 +0000 UTC Type:0 Mac:52:54:00:82:94:88 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:functional-504207 Clientid:01:52:54:00:82:94:88}
I0908 16:50:23.411815   20954 main.go:141] libmachine: (functional-504207) DBG | domain functional-504207 has defined IP address 192.168.39.246 and MAC address 52:54:00:82:94:88 in network mk-functional-504207
I0908 16:50:23.411993   20954 main.go:141] libmachine: (functional-504207) Calling .GetSSHPort
I0908 16:50:23.412174   20954 main.go:141] libmachine: (functional-504207) Calling .GetSSHKeyPath
I0908 16:50:23.412331   20954 main.go:141] libmachine: (functional-504207) Calling .GetSSHUsername
I0908 16:50:23.412461   20954 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/functional-504207/id_rsa Username:docker}
I0908 16:50:23.508286   20954 build_images.go:161] Building image from path: /tmp/build.1231714838.tar
I0908 16:50:23.508353   20954 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 16:50:23.535530   20954 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1231714838.tar
I0908 16:50:23.544125   20954 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1231714838.tar: stat -c "%s %y" /var/lib/minikube/build/build.1231714838.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1231714838.tar': No such file or directory
I0908 16:50:23.544161   20954 ssh_runner.go:362] scp /tmp/build.1231714838.tar --> /var/lib/minikube/build/build.1231714838.tar (3072 bytes)
I0908 16:50:23.603582   20954 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1231714838
I0908 16:50:23.632030   20954 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1231714838 -xf /var/lib/minikube/build/build.1231714838.tar
I0908 16:50:23.656251   20954 crio.go:315] Building image: /var/lib/minikube/build/build.1231714838
I0908 16:50:23.656342   20954 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-504207 /var/lib/minikube/build/build.1231714838 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0908 16:50:29.236965   20954 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-504207 /var/lib/minikube/build/build.1231714838 --cgroup-manager=cgroupfs: (5.580584474s)
I0908 16:50:29.237033   20954 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1231714838
I0908 16:50:29.270737   20954 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1231714838.tar
I0908 16:50:29.293019   20954 build_images.go:217] Built localhost/my-image:functional-504207 from /tmp/build.1231714838.tar
I0908 16:50:29.293062   20954 build_images.go:133] succeeded building to: functional-504207
I0908 16:50:29.293068   20954 build_images.go:134] failed building to: 
I0908 16:50:29.293096   20954 main.go:141] libmachine: Making call to close driver server
I0908 16:50:29.293107   20954 main.go:141] libmachine: (functional-504207) Calling .Close
I0908 16:50:29.293388   20954 main.go:141] libmachine: Successfully made call to close driver server
I0908 16:50:29.293408   20954 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 16:50:29.293417   20954 main.go:141] libmachine: Making call to close driver server
I0908 16:50:29.293425   20954 main.go:141] libmachine: (functional-504207) Calling .Close
I0908 16:50:29.293437   20954 main.go:141] libmachine: (functional-504207) DBG | Closing plugin on server side
I0908 16:50:29.293671   20954 main.go:141] libmachine: (functional-504207) DBG | Closing plugin on server side
I0908 16:50:29.293686   20954 main.go:141] libmachine: Successfully made call to close driver server
I0908 16:50:29.293733   20954 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.43s)
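The ImageBuild steps logged above reduce to: copy the build context tar onto the node, unpack it under /var/lib/minikube/build, run podman build with --cgroup-manager=cgroupfs, and clean up. Below is a minimal Go sketch of that flow, not minikube's own code, assuming passwordless SSH to the node with the key and IP shown in the log; runOverSSH is an illustrative helper.

// Minimal sketch (not minikube code) of the image-build flow above, assuming
// passwordless SSH to the node with the key shown in the log. runOverSSH is an
// illustrative helper; the build context tar is assumed to have been copied up
// (the log does this with scp) before the unpack step.
package main

import (
	"fmt"
	"os/exec"
)

func runOverSSH(host, key, cmd string) error {
	c := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no", "docker@"+host, cmd)
	out, err := c.CombinedOutput()
	fmt.Printf("$ %s\n%s", cmd, out)
	return err
}

func main() {
	host := "192.168.39.246"
	key := "/home/jenkins/minikube-integration/21504-7629/.minikube/machines/functional-504207/id_rsa"
	tar := "/var/lib/minikube/build/build.1231714838.tar"
	dir := "/var/lib/minikube/build/build.1231714838"

	steps := []string{
		"sudo mkdir -p /var/lib/minikube/build",
		// the tar itself is copied up with scp at this point in the real flow
		"sudo mkdir -p " + dir,
		"sudo tar -C " + dir + " -xf " + tar,
		"sudo podman build -t localhost/my-image:functional-504207 " + dir + " --cgroup-manager=cgroupfs",
		"sudo rm -rf " + dir,
		"sudo rm -f " + tar,
	}
	for _, step := range steps {
		if err := runOverSSH(host, key, step); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}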

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.940779514s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-504207
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image load --daemon kicbase/echo-server:functional-504207 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-504207 image load --daemon kicbase/echo-server:functional-504207 --alsologtostderr: (1.224535278s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "303.14499ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "46.012862ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "293.24514ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "44.696099ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image load --daemon kicbase/echo-server:functional-504207 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-504207
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image load --daemon kicbase/echo-server:functional-504207 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image save kicbase/echo-server:functional-504207 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-504207 image save kicbase/echo-server:functional-504207 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (8.019414286s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image rm kicbase/echo-server:functional-504207 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-504207
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 image save --daemon kicbase/echo-server:functional-504207 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-504207
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (14.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-504207 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-504207 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-xnj7c" [7ee11a30-dbbe-403c-a280-60d850aa558f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-xnj7c" [7ee11a30-dbbe-403c-a280-60d850aa558f] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.00638232s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.15s)
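The DeployApp sequence above boils down to three kubectl calls against the functional-504207 context: create the deployment, expose it as a NodePort service on port 8080, and wait for the pod behind the app=hello-node selector to become Ready. The following is a hedged sketch of that sequence; it uses kubectl wait in place of the test's own polling loop.

// Sketch only: the same deploy-and-expose steps via kubectl, with `kubectl wait`
// standing in for the test's label-selector polling. Context and names are the
// ones shown in the log.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-504207"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	kubectl("create", "deployment", "hello-node", "--image", "kicbase/echo-server")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
	kubectl("wait", "--for=condition=Ready", "pod", "-l", "app=hello-node", "--timeout=10m")
}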

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-504207 /tmp/TestFunctionalparallelMountCmdany-port2258638809/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757350211412436049" to /tmp/TestFunctionalparallelMountCmdany-port2258638809/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757350211412436049" to /tmp/TestFunctionalparallelMountCmdany-port2258638809/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757350211412436049" to /tmp/TestFunctionalparallelMountCmdany-port2258638809/001/test-1757350211412436049
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504207 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.711631ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0908 16:50:11.667491   11781 retry.go:31] will retry after 673.777004ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 16:50 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 16:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 16:50 test-1757350211412436049
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh cat /mount-9p/test-1757350211412436049
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-504207 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [b207c2de-5d07-4b33-bac7-0ced5f296f6c] Pending
helpers_test.go:352: "busybox-mount" [b207c2de-5d07-4b33-bac7-0ced5f296f6c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [b207c2de-5d07-4b33-bac7-0ced5f296f6c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [b207c2de-5d07-4b33-bac7-0ced5f296f6c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004780137s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-504207 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-504207 /tmp/TestFunctionalparallelMountCmdany-port2258638809/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.88s)
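The mount test above shows the retry pattern used whenever a guest-side check races the mount daemon: the first findmnt fails, retry.go logs "will retry after ...", and the command is re-run after a backoff. Below is a minimal sketch of that pattern, with illustrative backoff values rather than the ones minikube computes.

// Minimal sketch of the retry-with-backoff pattern seen above ("will retry
// after ..."): re-run a check until it passes or a deadline expires. Backoff
// values here are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func retryUntil(deadline, backoff time.Duration, check func() error) error {
	start := time.Now()
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("gave up after %s: %w", deadline, err)
		}
		fmt.Printf("will retry after %s: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2
	}
}

func main() {
	err := retryUntil(30*time.Second, 500*time.Millisecond, func() error {
		// same guest-side check the test runs over minikube ssh
		return exec.Command("out/minikube-linux-amd64", "-p", "functional-504207",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
	})
	fmt.Println("mount check:", err)
}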

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-504207 service list: (1.248693901s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-504207 service list -o json: (1.268086926s)
functional_test.go:1504: Took "1.268187716s" to run "out/minikube-linux-amd64 -p functional-504207 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.246:30872
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.246:30872
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
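The endpoint found above (http://192.168.39.246:30872) is the node IP plus the service's NodePort. As a hedged sketch, the same URL can be assembled without `minikube service` by reading the nodePort with kubectl and the node IP with `minikube ip`; the jsonpath expression assumes hello-node exposes a single port.

// Sketch: rebuild the NodePort URL from the service spec and the node IP.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	nodePort, err := exec.Command("kubectl", "--context", "functional-504207",
		"get", "svc", "hello-node", "-o", "jsonpath={.spec.ports[0].nodePort}").Output()
	if err != nil {
		panic(err)
	}
	nodeIP, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-504207", "ip").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("http://%s:%s\n", strings.TrimSpace(string(nodeIP)), strings.TrimSpace(string(nodePort)))
}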

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-504207 /tmp/TestFunctionalparallelMountCmdspecific-port4196804358/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504207 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (250.591461ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0908 16:50:22.539870   11781 retry.go:31] will retry after 374.622998ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-504207 /tmp/TestFunctionalparallelMountCmdspecific-port4196804358/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504207 ssh "sudo umount -f /mount-9p": exit status 1 (225.757486ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-504207 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-504207 /tmp/TestFunctionalparallelMountCmdspecific-port4196804358/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-504207 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2785249662/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-504207 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2785249662/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-504207 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2785249662/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-504207 ssh "findmnt -T" /mount1: exit status 1 (322.822659ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0908 16:50:24.366787   11781 retry.go:31] will retry after 431.95804ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-504207 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-504207 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-504207 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2785249662/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-504207 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2785249662/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-504207 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2785249662/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-504207
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-504207
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-504207
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (245.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0908 16:50:54.297908   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:51:22.001588   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-864336 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (4m4.288822785s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (245.02s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-864336 kubectl -- rollout status deployment/busybox: (5.349337987s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-65wh2 -- nslookup kubernetes.io
E0908 16:54:48.238103   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:54:48.244564   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:54:48.256021   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:54:48.277475   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-h7td6 -- nslookup kubernetes.io
E0908 16:54:48.319225   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:54:48.400693   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-njxdr -- nslookup kubernetes.io
E0908 16:54:48.562278   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-65wh2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-h7td6 -- nslookup kubernetes.default
E0908 16:54:48.884003   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-njxdr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-65wh2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-h7td6 -- nslookup kubernetes.default.svc.cluster.local
E0908 16:54:49.526140   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-njxdr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.56s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-65wh2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-65wh2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-h7td6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-h7td6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-njxdr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0908 16:54:50.808130   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 kubectl -- exec busybox-7b57f96db7-njxdr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (58.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 node add --alsologtostderr -v 5
E0908 16:54:53.369801   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:54:58.491876   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:55:08.733523   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:55:29.215743   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-864336 node add --alsologtostderr -v 5: (57.313437171s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.25s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-864336 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp testdata/cp-test.txt ha-864336:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2130771923/001/cp-test_ha-864336.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336:/home/docker/cp-test.txt ha-864336-m02:/home/docker/cp-test_ha-864336_ha-864336-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m02 "sudo cat /home/docker/cp-test_ha-864336_ha-864336-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336:/home/docker/cp-test.txt ha-864336-m03:/home/docker/cp-test_ha-864336_ha-864336-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m03 "sudo cat /home/docker/cp-test_ha-864336_ha-864336-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336:/home/docker/cp-test.txt ha-864336-m04:/home/docker/cp-test_ha-864336_ha-864336-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m04 "sudo cat /home/docker/cp-test_ha-864336_ha-864336-m04.txt"
E0908 16:55:54.296741   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp testdata/cp-test.txt ha-864336-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2130771923/001/cp-test_ha-864336-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336-m02:/home/docker/cp-test.txt ha-864336:/home/docker/cp-test_ha-864336-m02_ha-864336.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336 "sudo cat /home/docker/cp-test_ha-864336-m02_ha-864336.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336-m02:/home/docker/cp-test.txt ha-864336-m03:/home/docker/cp-test_ha-864336-m02_ha-864336-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m03 "sudo cat /home/docker/cp-test_ha-864336-m02_ha-864336-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336-m02:/home/docker/cp-test.txt ha-864336-m04:/home/docker/cp-test_ha-864336-m02_ha-864336-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m04 "sudo cat /home/docker/cp-test_ha-864336-m02_ha-864336-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp testdata/cp-test.txt ha-864336-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2130771923/001/cp-test_ha-864336-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336-m03:/home/docker/cp-test.txt ha-864336:/home/docker/cp-test_ha-864336-m03_ha-864336.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336 "sudo cat /home/docker/cp-test_ha-864336-m03_ha-864336.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336-m03:/home/docker/cp-test.txt ha-864336-m02:/home/docker/cp-test_ha-864336-m03_ha-864336-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m02 "sudo cat /home/docker/cp-test_ha-864336-m03_ha-864336-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336-m03:/home/docker/cp-test.txt ha-864336-m04:/home/docker/cp-test_ha-864336-m03_ha-864336-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m04 "sudo cat /home/docker/cp-test_ha-864336-m03_ha-864336-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp testdata/cp-test.txt ha-864336-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2130771923/001/cp-test_ha-864336-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336-m04:/home/docker/cp-test.txt ha-864336:/home/docker/cp-test_ha-864336-m04_ha-864336.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336 "sudo cat /home/docker/cp-test_ha-864336-m04_ha-864336.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336-m04:/home/docker/cp-test.txt ha-864336-m02:/home/docker/cp-test_ha-864336-m04_ha-864336-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m02 "sudo cat /home/docker/cp-test_ha-864336-m04_ha-864336-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 cp ha-864336-m04:/home/docker/cp-test.txt ha-864336-m03:/home/docker/cp-test_ha-864336-m04_ha-864336-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 ssh -n ha-864336-m03 "sudo cat /home/docker/cp-test_ha-864336-m04_ha-864336-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.64s)
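CopyFile repeats one pattern per node pair: `minikube cp` a file in, then `minikube ssh ... sudo cat` it back out and compare. Below is an illustrative round-trip sketch of that pattern for the primary node only, assuming testdata/cp-test.txt exists in the working directory; profile and paths mirror the log.

// Illustrative round-trip for the cp pattern above: copy a local file into the
// primary node with `minikube cp`, read it back over `minikube ssh`, compare.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if err := exec.Command("out/minikube-linux-amd64", "-p", "ha-864336",
		"cp", "testdata/cp-test.txt", "ha-864336:/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	got, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-864336",
		"ssh", "-n", "ha-864336", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("round-trip matches:", bytes.Equal(bytes.TrimSpace(want), bytes.TrimSpace(got)))
}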

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 node stop m02 --alsologtostderr -v 5
E0908 16:56:10.177799   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 16:57:32.099175   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-864336 node stop m02 --alsologtostderr -v 5: (1m30.99724215s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-864336 status --alsologtostderr -v 5: exit status 7 (720.590783ms)

                                                
                                                
-- stdout --
	ha-864336
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-864336-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-864336-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-864336-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 16:57:34.993017   26048 out.go:360] Setting OutFile to fd 1 ...
	I0908 16:57:34.993111   26048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:57:34.993115   26048 out.go:374] Setting ErrFile to fd 2...
	I0908 16:57:34.993119   26048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 16:57:34.993321   26048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
	I0908 16:57:34.993494   26048 out.go:368] Setting JSON to false
	I0908 16:57:34.993528   26048 mustload.go:65] Loading cluster: ha-864336
	I0908 16:57:34.993617   26048 notify.go:220] Checking for updates...
	I0908 16:57:34.993916   26048 config.go:182] Loaded profile config "ha-864336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 16:57:34.993936   26048 status.go:174] checking status of ha-864336 ...
	I0908 16:57:34.994386   26048 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:57:34.994427   26048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:57:35.016882   26048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34211
	I0908 16:57:35.017379   26048 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:57:35.017833   26048 main.go:141] libmachine: Using API Version  1
	I0908 16:57:35.017855   26048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:57:35.018252   26048 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:57:35.018479   26048 main.go:141] libmachine: (ha-864336) Calling .GetState
	I0908 16:57:35.020391   26048 status.go:371] ha-864336 host status = "Running" (err=<nil>)
	I0908 16:57:35.020409   26048 host.go:66] Checking if "ha-864336" exists ...
	I0908 16:57:35.020760   26048 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:57:35.020807   26048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:57:35.036743   26048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0908 16:57:35.037231   26048 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:57:35.037689   26048 main.go:141] libmachine: Using API Version  1
	I0908 16:57:35.037715   26048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:57:35.038045   26048 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:57:35.038313   26048 main.go:141] libmachine: (ha-864336) Calling .GetIP
	I0908 16:57:35.041649   26048 main.go:141] libmachine: (ha-864336) DBG | domain ha-864336 has defined MAC address 52:54:00:c4:73:d5 in network mk-ha-864336
	I0908 16:57:35.042130   26048 main.go:141] libmachine: (ha-864336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:d5", ip: ""} in network mk-ha-864336: {Iface:virbr1 ExpiryTime:2025-09-08 17:50:53 +0000 UTC Type:0 Mac:52:54:00:c4:73:d5 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-864336 Clientid:01:52:54:00:c4:73:d5}
	I0908 16:57:35.042164   26048 main.go:141] libmachine: (ha-864336) DBG | domain ha-864336 has defined IP address 192.168.39.110 and MAC address 52:54:00:c4:73:d5 in network mk-ha-864336
	I0908 16:57:35.042328   26048 host.go:66] Checking if "ha-864336" exists ...
	I0908 16:57:35.042645   26048 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:57:35.042724   26048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:57:35.057491   26048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
	I0908 16:57:35.057954   26048 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:57:35.058541   26048 main.go:141] libmachine: Using API Version  1
	I0908 16:57:35.058562   26048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:57:35.058880   26048 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:57:35.059065   26048 main.go:141] libmachine: (ha-864336) Calling .DriverName
	I0908 16:57:35.059261   26048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 16:57:35.059307   26048 main.go:141] libmachine: (ha-864336) Calling .GetSSHHostname
	I0908 16:57:35.062344   26048 main.go:141] libmachine: (ha-864336) DBG | domain ha-864336 has defined MAC address 52:54:00:c4:73:d5 in network mk-ha-864336
	I0908 16:57:35.062812   26048 main.go:141] libmachine: (ha-864336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:d5", ip: ""} in network mk-ha-864336: {Iface:virbr1 ExpiryTime:2025-09-08 17:50:53 +0000 UTC Type:0 Mac:52:54:00:c4:73:d5 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-864336 Clientid:01:52:54:00:c4:73:d5}
	I0908 16:57:35.062854   26048 main.go:141] libmachine: (ha-864336) DBG | domain ha-864336 has defined IP address 192.168.39.110 and MAC address 52:54:00:c4:73:d5 in network mk-ha-864336
	I0908 16:57:35.063144   26048 main.go:141] libmachine: (ha-864336) Calling .GetSSHPort
	I0908 16:57:35.063326   26048 main.go:141] libmachine: (ha-864336) Calling .GetSSHKeyPath
	I0908 16:57:35.063454   26048 main.go:141] libmachine: (ha-864336) Calling .GetSSHUsername
	I0908 16:57:35.063577   26048 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/ha-864336/id_rsa Username:docker}
	I0908 16:57:35.159688   26048 ssh_runner.go:195] Run: systemctl --version
	I0908 16:57:35.168255   26048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 16:57:35.187070   26048 kubeconfig.go:125] found "ha-864336" server: "https://192.168.39.254:8443"
	I0908 16:57:35.187105   26048 api_server.go:166] Checking apiserver status ...
	I0908 16:57:35.187146   26048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 16:57:35.209986   26048 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1433/cgroup
	W0908 16:57:35.222106   26048 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1433/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0908 16:57:35.222192   26048 ssh_runner.go:195] Run: ls
	I0908 16:57:35.227553   26048 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0908 16:57:35.235134   26048 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0908 16:57:35.235161   26048 status.go:463] ha-864336 apiserver status = Running (err=<nil>)
	I0908 16:57:35.235171   26048 status.go:176] ha-864336 status: &{Name:ha-864336 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 16:57:35.235193   26048 status.go:174] checking status of ha-864336-m02 ...
	I0908 16:57:35.235633   26048 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:57:35.235684   26048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:57:35.250807   26048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37553
	I0908 16:57:35.251217   26048 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:57:35.251768   26048 main.go:141] libmachine: Using API Version  1
	I0908 16:57:35.251795   26048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:57:35.252124   26048 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:57:35.252324   26048 main.go:141] libmachine: (ha-864336-m02) Calling .GetState
	I0908 16:57:35.254354   26048 status.go:371] ha-864336-m02 host status = "Stopped" (err=<nil>)
	I0908 16:57:35.254375   26048 status.go:384] host is not running, skipping remaining checks
	I0908 16:57:35.254382   26048 status.go:176] ha-864336-m02 status: &{Name:ha-864336-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 16:57:35.254405   26048 status.go:174] checking status of ha-864336-m03 ...
	I0908 16:57:35.254901   26048 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:57:35.254965   26048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:57:35.270193   26048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42299
	I0908 16:57:35.270630   26048 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:57:35.271175   26048 main.go:141] libmachine: Using API Version  1
	I0908 16:57:35.271200   26048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:57:35.271559   26048 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:57:35.271767   26048 main.go:141] libmachine: (ha-864336-m03) Calling .GetState
	I0908 16:57:35.273499   26048 status.go:371] ha-864336-m03 host status = "Running" (err=<nil>)
	I0908 16:57:35.273513   26048 host.go:66] Checking if "ha-864336-m03" exists ...
	I0908 16:57:35.273824   26048 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:57:35.273868   26048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:57:35.289838   26048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0908 16:57:35.290361   26048 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:57:35.290909   26048 main.go:141] libmachine: Using API Version  1
	I0908 16:57:35.290934   26048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:57:35.291213   26048 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:57:35.291404   26048 main.go:141] libmachine: (ha-864336-m03) Calling .GetIP
	I0908 16:57:35.294027   26048 main.go:141] libmachine: (ha-864336-m03) DBG | domain ha-864336-m03 has defined MAC address 52:54:00:cc:40:d8 in network mk-ha-864336
	I0908 16:57:35.294523   26048 main.go:141] libmachine: (ha-864336-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:40:d8", ip: ""} in network mk-ha-864336: {Iface:virbr1 ExpiryTime:2025-09-08 17:53:19 +0000 UTC Type:0 Mac:52:54:00:cc:40:d8 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-864336-m03 Clientid:01:52:54:00:cc:40:d8}
	I0908 16:57:35.294556   26048 main.go:141] libmachine: (ha-864336-m03) DBG | domain ha-864336-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:cc:40:d8 in network mk-ha-864336
	I0908 16:57:35.294758   26048 host.go:66] Checking if "ha-864336-m03" exists ...
	I0908 16:57:35.295120   26048 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:57:35.295156   26048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:57:35.310734   26048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39317
	I0908 16:57:35.311345   26048 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:57:35.311765   26048 main.go:141] libmachine: Using API Version  1
	I0908 16:57:35.311780   26048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:57:35.312109   26048 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:57:35.312323   26048 main.go:141] libmachine: (ha-864336-m03) Calling .DriverName
	I0908 16:57:35.312495   26048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 16:57:35.312513   26048 main.go:141] libmachine: (ha-864336-m03) Calling .GetSSHHostname
	I0908 16:57:35.315550   26048 main.go:141] libmachine: (ha-864336-m03) DBG | domain ha-864336-m03 has defined MAC address 52:54:00:cc:40:d8 in network mk-ha-864336
	I0908 16:57:35.316107   26048 main.go:141] libmachine: (ha-864336-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:40:d8", ip: ""} in network mk-ha-864336: {Iface:virbr1 ExpiryTime:2025-09-08 17:53:19 +0000 UTC Type:0 Mac:52:54:00:cc:40:d8 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-864336-m03 Clientid:01:52:54:00:cc:40:d8}
	I0908 16:57:35.316135   26048 main.go:141] libmachine: (ha-864336-m03) DBG | domain ha-864336-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:cc:40:d8 in network mk-ha-864336
	I0908 16:57:35.316395   26048 main.go:141] libmachine: (ha-864336-m03) Calling .GetSSHPort
	I0908 16:57:35.316616   26048 main.go:141] libmachine: (ha-864336-m03) Calling .GetSSHKeyPath
	I0908 16:57:35.316803   26048 main.go:141] libmachine: (ha-864336-m03) Calling .GetSSHUsername
	I0908 16:57:35.316975   26048 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/ha-864336-m03/id_rsa Username:docker}
	I0908 16:57:35.409502   26048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 16:57:35.440097   26048 kubeconfig.go:125] found "ha-864336" server: "https://192.168.39.254:8443"
	I0908 16:57:35.440132   26048 api_server.go:166] Checking apiserver status ...
	I0908 16:57:35.440175   26048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 16:57:35.463076   26048 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1814/cgroup
	W0908 16:57:35.476404   26048 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1814/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0908 16:57:35.476467   26048 ssh_runner.go:195] Run: ls
	I0908 16:57:35.483522   26048 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0908 16:57:35.488330   26048 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0908 16:57:35.488357   26048 status.go:463] ha-864336-m03 apiserver status = Running (err=<nil>)
	I0908 16:57:35.488366   26048 status.go:176] ha-864336-m03 status: &{Name:ha-864336-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 16:57:35.488384   26048 status.go:174] checking status of ha-864336-m04 ...
	I0908 16:57:35.488677   26048 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:57:35.488726   26048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:57:35.503795   26048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0908 16:57:35.504268   26048 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:57:35.504879   26048 main.go:141] libmachine: Using API Version  1
	I0908 16:57:35.504902   26048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:57:35.505209   26048 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:57:35.505407   26048 main.go:141] libmachine: (ha-864336-m04) Calling .GetState
	I0908 16:57:35.507036   26048 status.go:371] ha-864336-m04 host status = "Running" (err=<nil>)
	I0908 16:57:35.507050   26048 host.go:66] Checking if "ha-864336-m04" exists ...
	I0908 16:57:35.507327   26048 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:57:35.507365   26048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:57:35.522740   26048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45125
	I0908 16:57:35.523137   26048 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:57:35.523544   26048 main.go:141] libmachine: Using API Version  1
	I0908 16:57:35.523563   26048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:57:35.523875   26048 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:57:35.524077   26048 main.go:141] libmachine: (ha-864336-m04) Calling .GetIP
	I0908 16:57:35.526888   26048 main.go:141] libmachine: (ha-864336-m04) DBG | domain ha-864336-m04 has defined MAC address 52:54:00:23:1c:f1 in network mk-ha-864336
	I0908 16:57:35.527253   26048 main.go:141] libmachine: (ha-864336-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1c:f1", ip: ""} in network mk-ha-864336: {Iface:virbr1 ExpiryTime:2025-09-08 17:55:08 +0000 UTC Type:0 Mac:52:54:00:23:1c:f1 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-864336-m04 Clientid:01:52:54:00:23:1c:f1}
	I0908 16:57:35.527278   26048 main.go:141] libmachine: (ha-864336-m04) DBG | domain ha-864336-m04 has defined IP address 192.168.39.132 and MAC address 52:54:00:23:1c:f1 in network mk-ha-864336
	I0908 16:57:35.527421   26048 host.go:66] Checking if "ha-864336-m04" exists ...
	I0908 16:57:35.527795   26048 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 16:57:35.527833   26048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 16:57:35.543337   26048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42093
	I0908 16:57:35.543893   26048 main.go:141] libmachine: () Calling .GetVersion
	I0908 16:57:35.544385   26048 main.go:141] libmachine: Using API Version  1
	I0908 16:57:35.544408   26048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 16:57:35.544722   26048 main.go:141] libmachine: () Calling .GetMachineName
	I0908 16:57:35.544892   26048 main.go:141] libmachine: (ha-864336-m04) Calling .DriverName
	I0908 16:57:35.545060   26048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 16:57:35.545079   26048 main.go:141] libmachine: (ha-864336-m04) Calling .GetSSHHostname
	I0908 16:57:35.547847   26048 main.go:141] libmachine: (ha-864336-m04) DBG | domain ha-864336-m04 has defined MAC address 52:54:00:23:1c:f1 in network mk-ha-864336
	I0908 16:57:35.548249   26048 main.go:141] libmachine: (ha-864336-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:1c:f1", ip: ""} in network mk-ha-864336: {Iface:virbr1 ExpiryTime:2025-09-08 17:55:08 +0000 UTC Type:0 Mac:52:54:00:23:1c:f1 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-864336-m04 Clientid:01:52:54:00:23:1c:f1}
	I0908 16:57:35.548277   26048 main.go:141] libmachine: (ha-864336-m04) DBG | domain ha-864336-m04 has defined IP address 192.168.39.132 and MAC address 52:54:00:23:1c:f1 in network mk-ha-864336
	I0908 16:57:35.548550   26048 main.go:141] libmachine: (ha-864336-m04) Calling .GetSSHPort
	I0908 16:57:35.548755   26048 main.go:141] libmachine: (ha-864336-m04) Calling .GetSSHKeyPath
	I0908 16:57:35.548922   26048 main.go:141] libmachine: (ha-864336-m04) Calling .GetSSHUsername
	I0908 16:57:35.549034   26048 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/ha-864336-m04/id_rsa Username:docker}
	I0908 16:57:35.645025   26048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 16:57:35.664549   26048 status.go:176] ha-864336-m04 status: &{Name:ha-864336-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.72s)
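The status check in the stderr block above reads the server address from the profile's kubeconfig and probes the apiserver at https://192.168.39.254:8443/healthz, which answered 200 "ok". A minimal Go sketch of the same probe, assuming that endpoint is reachable from the host and skipping certificate verification only so the sketch stays self-contained (the real check is driven by the cluster's kubeconfig):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Skip cert verification purely to keep this standalone; the check in
		// the log is configured from the cluster's own kubeconfig.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz request failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // the run above saw 200 and "ok"
	}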

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (38.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-864336 node start m02 --alsologtostderr -v 5: (37.146906805s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-864336 status --alsologtostderr -v 5: (1.038088673s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (38.27s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.179193577s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (410.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 stop --alsologtostderr -v 5
E0908 16:59:48.238609   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:00:15.941464   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:00:54.297266   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:02:17.363770   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-864336 stop --alsologtostderr -v 5: (4m34.999479676s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 start --wait true --alsologtostderr -v 5
E0908 17:04:48.238811   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-864336 start --wait true --alsologtostderr -v 5: (2m15.468103508s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (410.59s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-864336 node delete m03 --alsologtostderr -v 5: (17.832651196s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.67s)
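For reference, the readiness check at ha_test.go:521 above can be reproduced with a minimal Go sketch; it assumes kubectl is on the PATH and the active kubeconfig points at this cluster (host-side assumptions, not part of the test run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same go-template the test passes to kubectl: one Ready condition
		// status ("True"/"False") per node.
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s", err, out)
			return
		}
		fmt.Print(string(out))
	}

The template emits one line per node, so a healthy cluster at this point in the run prints "True" once for each remaining node.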

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (272.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 stop --alsologtostderr -v 5
E0908 17:05:54.296864   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:09:48.238496   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-864336 stop --alsologtostderr -v 5: (4m32.465033755s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-864336 status --alsologtostderr -v 5: exit status 7 (99.083631ms)

                                                
                                                
-- stdout --
	ha-864336
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-864336-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-864336-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 17:09:58.291747   30054 out.go:360] Setting OutFile to fd 1 ...
	I0908 17:09:58.291852   30054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:09:58.291862   30054 out.go:374] Setting ErrFile to fd 2...
	I0908 17:09:58.291866   30054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:09:58.292065   30054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
	I0908 17:09:58.292221   30054 out.go:368] Setting JSON to false
	I0908 17:09:58.292244   30054 mustload.go:65] Loading cluster: ha-864336
	I0908 17:09:58.292354   30054 notify.go:220] Checking for updates...
	I0908 17:09:58.292610   30054 config.go:182] Loaded profile config "ha-864336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:09:58.292627   30054 status.go:174] checking status of ha-864336 ...
	I0908 17:09:58.293046   30054 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:09:58.293084   30054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:09:58.307810   30054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44809
	I0908 17:09:58.308297   30054 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:09:58.308987   30054 main.go:141] libmachine: Using API Version  1
	I0908 17:09:58.309015   30054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:09:58.309345   30054 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:09:58.309525   30054 main.go:141] libmachine: (ha-864336) Calling .GetState
	I0908 17:09:58.311312   30054 status.go:371] ha-864336 host status = "Stopped" (err=<nil>)
	I0908 17:09:58.311325   30054 status.go:384] host is not running, skipping remaining checks
	I0908 17:09:58.311338   30054 status.go:176] ha-864336 status: &{Name:ha-864336 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 17:09:58.311381   30054 status.go:174] checking status of ha-864336-m02 ...
	I0908 17:09:58.311682   30054 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:09:58.311725   30054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:09:58.326219   30054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36129
	I0908 17:09:58.326729   30054 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:09:58.327272   30054 main.go:141] libmachine: Using API Version  1
	I0908 17:09:58.327331   30054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:09:58.327615   30054 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:09:58.327813   30054 main.go:141] libmachine: (ha-864336-m02) Calling .GetState
	I0908 17:09:58.329239   30054 status.go:371] ha-864336-m02 host status = "Stopped" (err=<nil>)
	I0908 17:09:58.329251   30054 status.go:384] host is not running, skipping remaining checks
	I0908 17:09:58.329255   30054 status.go:176] ha-864336-m02 status: &{Name:ha-864336-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 17:09:58.329269   30054 status.go:174] checking status of ha-864336-m04 ...
	I0908 17:09:58.329548   30054 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:09:58.329578   30054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:09:58.344281   30054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I0908 17:09:58.344673   30054 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:09:58.345047   30054 main.go:141] libmachine: Using API Version  1
	I0908 17:09:58.345067   30054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:09:58.345385   30054 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:09:58.345533   30054 main.go:141] libmachine: (ha-864336-m04) Calling .GetState
	I0908 17:09:58.346884   30054 status.go:371] ha-864336-m04 host status = "Stopped" (err=<nil>)
	I0908 17:09:58.346898   30054 status.go:384] host is not running, skipping remaining checks
	I0908 17:09:58.346904   30054 status.go:176] ha-864336-m04 status: &{Name:ha-864336-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.56s)
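The status call above is expected to exit non-zero once every host is stopped; in this run it returned exit status 7. A minimal sketch of reading that exit code from Go, assuming the same out/minikube-linux-amd64 binary and the ha-864336 profile used in this run:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Assumes the minikube binary and the ha-864336 profile from this run.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-864336", "status")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A fully stopped cluster reported exit status 7 in the run above.
			fmt.Printf("status exited with code %d\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Print(string(out))
	}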

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (126.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0908 17:10:54.296865   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:11:11.302996   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-864336 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (2m5.936542929s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (126.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (86.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-864336 node add --control-plane --alsologtostderr -v 5: (1m25.211677196s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-864336 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (86.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                    
x
+
TestJSONOutput/start/Command (83.56s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-522212 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E0908 17:14:48.238855   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-522212 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.560779974s)
--- PASS: TestJSONOutput/start/Command (83.56s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.83s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-522212 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.83s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-522212 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.73s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.37s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-522212 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-522212 --output=json --user=testUser: (7.372511141s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-828083 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-828083 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (60.920842ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dd445610-1da0-4dea-8d33-3ba47a8cded3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-828083] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"537e9eb7-5f7a-41ed-9728-125349171dda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21504"}}
	{"specversion":"1.0","id":"977dbbc3-61e2-4c1b-af19-fbb23a4a4ef9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d3b42e46-c185-462f-a1d5-c27632cd08d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig"}}
	{"specversion":"1.0","id":"0950959c-1ad4-4562-b2c2-fc8283dec712","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube"}}
	{"specversion":"1.0","id":"2b7ee526-7ea9-4f21-90f8-d1c377a586e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"dcad3185-decf-4043-b38a-b5e3da52635d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"01d9ba69-0124-4095-94f1-4a758536dc50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-828083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-828083
--- PASS: TestErrorJSONOutput (0.20s)
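The stdout captured above is a stream of CloudEvents-style JSON lines, one per step or message. A minimal decoding sketch over a trimmed copy of the final error event; the struct and field selection are illustrative only, not minikube's internal types:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event mirrors only the fields visible in the stdout above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// Trimmed copy of the final error event from the stdout above.
		line := `{"specversion":"1.0","id":"01d9ba69-0124-4095-94f1-4a758536dc50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		fmt.Printf("%s: %s (exit %s)\n", e.Type, e.Data["message"], e.Data["exitcode"])
	}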

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (95.97s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-242561 --driver=kvm2  --container-runtime=crio
E0908 17:15:54.302011   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-242561 --driver=kvm2  --container-runtime=crio: (46.524675082s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-251674 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-251674 --driver=kvm2  --container-runtime=crio: (46.512642382s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-242561
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-251674
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-251674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-251674
helpers_test.go:175: Cleaning up "first-242561" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-242561
--- PASS: TestMinikubeProfile (95.97s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (29.36s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-505338 --memory=3072 --mount-string /tmp/TestMountStartserial922493044/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-505338 --memory=3072 --mount-string /tmp/TestMountStartserial922493044/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.355338676s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.36s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-505338 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-505338 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (30.3s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-517850 --memory=3072 --mount-string /tmp/TestMountStartserial922493044/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-517850 --memory=3072 --mount-string /tmp/TestMountStartserial922493044/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.302044279s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.30s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-517850 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-517850 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.9s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-505338 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-517850 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-517850 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.46s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-517850
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-517850: (1.456193448s)
--- PASS: TestMountStart/serial/Stop (1.46s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (26.07s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-517850
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-517850: (25.074598855s)
--- PASS: TestMountStart/serial/RestartStopped (26.07s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-517850 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-517850 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (115.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-079335 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0908 17:18:57.367409   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:19:48.238532   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-079335 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.650241045s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.07s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-079335 -- rollout status deployment/busybox: (3.920414196s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- exec busybox-7b57f96db7-bsrb4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- exec busybox-7b57f96db7-h682c -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- exec busybox-7b57f96db7-bsrb4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- exec busybox-7b57f96db7-h682c -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- exec busybox-7b57f96db7-bsrb4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- exec busybox-7b57f96db7-h682c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.48s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- exec busybox-7b57f96db7-bsrb4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- exec busybox-7b57f96db7-bsrb4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- exec busybox-7b57f96db7-h682c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079335 -- exec busybox-7b57f96db7-h682c -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (53.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-079335 -v=5 --alsologtostderr
E0908 17:20:54.296759   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-079335 -v=5 --alsologtostderr: (52.913507658s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.50s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-079335 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 cp testdata/cp-test.txt multinode-079335:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 cp multinode-079335:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2722238204/001/cp-test_multinode-079335.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 cp multinode-079335:/home/docker/cp-test.txt multinode-079335-m02:/home/docker/cp-test_multinode-079335_multinode-079335-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335-m02 "sudo cat /home/docker/cp-test_multinode-079335_multinode-079335-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 cp multinode-079335:/home/docker/cp-test.txt multinode-079335-m03:/home/docker/cp-test_multinode-079335_multinode-079335-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335-m03 "sudo cat /home/docker/cp-test_multinode-079335_multinode-079335-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 cp testdata/cp-test.txt multinode-079335-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 cp multinode-079335-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2722238204/001/cp-test_multinode-079335-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 cp multinode-079335-m02:/home/docker/cp-test.txt multinode-079335:/home/docker/cp-test_multinode-079335-m02_multinode-079335.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335 "sudo cat /home/docker/cp-test_multinode-079335-m02_multinode-079335.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 cp multinode-079335-m02:/home/docker/cp-test.txt multinode-079335-m03:/home/docker/cp-test_multinode-079335-m02_multinode-079335-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335-m03 "sudo cat /home/docker/cp-test_multinode-079335-m02_multinode-079335-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 cp testdata/cp-test.txt multinode-079335-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 cp multinode-079335-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2722238204/001/cp-test_multinode-079335-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 cp multinode-079335-m03:/home/docker/cp-test.txt multinode-079335:/home/docker/cp-test_multinode-079335-m03_multinode-079335.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335 "sudo cat /home/docker/cp-test_multinode-079335-m03_multinode-079335.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 cp multinode-079335-m03:/home/docker/cp-test.txt multinode-079335-m02:/home/docker/cp-test_multinode-079335-m03_multinode-079335-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335-m02 "sudo cat /home/docker/cp-test_multinode-079335-m03_multinode-079335-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.38s)
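The copy matrix above pushes one file host-to-node, node-to-host, and node-to-node, verifying every hop with ssh + cat. A condensed sketch of a single round trip, assuming the same profile and node names (the /tmp destination and the _copy.txt filename below are arbitrary choices, not from the test):

# host -> primary node
out/minikube-linux-amd64 -p multinode-079335 cp testdata/cp-test.txt multinode-079335:/home/docker/cp-test.txt

# node -> host
out/minikube-linux-amd64 -p multinode-079335 cp multinode-079335:/home/docker/cp-test.txt /tmp/cp-test_multinode-079335.txt

# node -> node, then verify the file on the target node
out/minikube-linux-amd64 -p multinode-079335 cp multinode-079335:/home/docker/cp-test.txt multinode-079335-m02:/home/docker/cp-test_copy.txt
out/minikube-linux-amd64 -p multinode-079335 ssh -n multinode-079335-m02 "sudo cat /home/docker/cp-test_copy.txt"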

                                                
                                    
TestMultiNode/serial/StopNode (3.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-079335 node stop m03: (2.290238417s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-079335 status: exit status 7 (436.044374ms)

                                                
                                                
-- stdout --
	multinode-079335
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-079335-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-079335-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-079335 status --alsologtostderr: exit status 7 (449.745292ms)

                                                
                                                
-- stdout --
	multinode-079335
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-079335-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-079335-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 17:21:21.640894   38353 out.go:360] Setting OutFile to fd 1 ...
	I0908 17:21:21.641115   38353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:21:21.641123   38353 out.go:374] Setting ErrFile to fd 2...
	I0908 17:21:21.641127   38353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:21:21.641290   38353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
	I0908 17:21:21.641435   38353 out.go:368] Setting JSON to false
	I0908 17:21:21.641461   38353 mustload.go:65] Loading cluster: multinode-079335
	I0908 17:21:21.641505   38353 notify.go:220] Checking for updates...
	I0908 17:21:21.641801   38353 config.go:182] Loaded profile config "multinode-079335": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:21:21.641816   38353 status.go:174] checking status of multinode-079335 ...
	I0908 17:21:21.642206   38353 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:21:21.642244   38353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:21:21.662740   38353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45035
	I0908 17:21:21.663187   38353 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:21:21.663762   38353 main.go:141] libmachine: Using API Version  1
	I0908 17:21:21.663796   38353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:21:21.664178   38353 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:21:21.664408   38353 main.go:141] libmachine: (multinode-079335) Calling .GetState
	I0908 17:21:21.666130   38353 status.go:371] multinode-079335 host status = "Running" (err=<nil>)
	I0908 17:21:21.666151   38353 host.go:66] Checking if "multinode-079335" exists ...
	I0908 17:21:21.666499   38353 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:21:21.666547   38353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:21:21.681941   38353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0908 17:21:21.682327   38353 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:21:21.682729   38353 main.go:141] libmachine: Using API Version  1
	I0908 17:21:21.682747   38353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:21:21.683122   38353 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:21:21.683309   38353 main.go:141] libmachine: (multinode-079335) Calling .GetIP
	I0908 17:21:21.685990   38353 main.go:141] libmachine: (multinode-079335) DBG | domain multinode-079335 has defined MAC address 52:54:00:a4:b8:d2 in network mk-multinode-079335
	I0908 17:21:21.686399   38353 main.go:141] libmachine: (multinode-079335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:b8:d2", ip: ""} in network mk-multinode-079335: {Iface:virbr1 ExpiryTime:2025-09-08 18:18:31 +0000 UTC Type:0 Mac:52:54:00:a4:b8:d2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-079335 Clientid:01:52:54:00:a4:b8:d2}
	I0908 17:21:21.686427   38353 main.go:141] libmachine: (multinode-079335) DBG | domain multinode-079335 has defined IP address 192.168.39.14 and MAC address 52:54:00:a4:b8:d2 in network mk-multinode-079335
	I0908 17:21:21.686584   38353 host.go:66] Checking if "multinode-079335" exists ...
	I0908 17:21:21.686898   38353 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:21:21.686942   38353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:21:21.704628   38353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36525
	I0908 17:21:21.705059   38353 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:21:21.705522   38353 main.go:141] libmachine: Using API Version  1
	I0908 17:21:21.705551   38353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:21:21.705876   38353 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:21:21.706071   38353 main.go:141] libmachine: (multinode-079335) Calling .DriverName
	I0908 17:21:21.706250   38353 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 17:21:21.706285   38353 main.go:141] libmachine: (multinode-079335) Calling .GetSSHHostname
	I0908 17:21:21.709074   38353 main.go:141] libmachine: (multinode-079335) DBG | domain multinode-079335 has defined MAC address 52:54:00:a4:b8:d2 in network mk-multinode-079335
	I0908 17:21:21.709510   38353 main.go:141] libmachine: (multinode-079335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:b8:d2", ip: ""} in network mk-multinode-079335: {Iface:virbr1 ExpiryTime:2025-09-08 18:18:31 +0000 UTC Type:0 Mac:52:54:00:a4:b8:d2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-079335 Clientid:01:52:54:00:a4:b8:d2}
	I0908 17:21:21.709568   38353 main.go:141] libmachine: (multinode-079335) DBG | domain multinode-079335 has defined IP address 192.168.39.14 and MAC address 52:54:00:a4:b8:d2 in network mk-multinode-079335
	I0908 17:21:21.709699   38353 main.go:141] libmachine: (multinode-079335) Calling .GetSSHPort
	I0908 17:21:21.709865   38353 main.go:141] libmachine: (multinode-079335) Calling .GetSSHKeyPath
	I0908 17:21:21.710002   38353 main.go:141] libmachine: (multinode-079335) Calling .GetSSHUsername
	I0908 17:21:21.710135   38353 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/multinode-079335/id_rsa Username:docker}
	I0908 17:21:21.791780   38353 ssh_runner.go:195] Run: systemctl --version
	I0908 17:21:21.799099   38353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 17:21:21.817845   38353 kubeconfig.go:125] found "multinode-079335" server: "https://192.168.39.14:8443"
	I0908 17:21:21.817882   38353 api_server.go:166] Checking apiserver status ...
	I0908 17:21:21.817926   38353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 17:21:21.839110   38353 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1341/cgroup
	W0908 17:21:21.851914   38353 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1341/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0908 17:21:21.851963   38353 ssh_runner.go:195] Run: ls
	I0908 17:21:21.857310   38353 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0908 17:21:21.861906   38353 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I0908 17:21:21.861930   38353 status.go:463] multinode-079335 apiserver status = Running (err=<nil>)
	I0908 17:21:21.861943   38353 status.go:176] multinode-079335 status: &{Name:multinode-079335 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 17:21:21.861959   38353 status.go:174] checking status of multinode-079335-m02 ...
	I0908 17:21:21.862261   38353 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:21:21.862299   38353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:21:21.878438   38353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41625
	I0908 17:21:21.878965   38353 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:21:21.879419   38353 main.go:141] libmachine: Using API Version  1
	I0908 17:21:21.879441   38353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:21:21.879732   38353 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:21:21.879907   38353 main.go:141] libmachine: (multinode-079335-m02) Calling .GetState
	I0908 17:21:21.881544   38353 status.go:371] multinode-079335-m02 host status = "Running" (err=<nil>)
	I0908 17:21:21.881559   38353 host.go:66] Checking if "multinode-079335-m02" exists ...
	I0908 17:21:21.881843   38353 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:21:21.881888   38353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:21:21.897520   38353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46167
	I0908 17:21:21.897985   38353 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:21:21.898435   38353 main.go:141] libmachine: Using API Version  1
	I0908 17:21:21.898463   38353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:21:21.898786   38353 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:21:21.898977   38353 main.go:141] libmachine: (multinode-079335-m02) Calling .GetIP
	I0908 17:21:21.901837   38353 main.go:141] libmachine: (multinode-079335-m02) DBG | domain multinode-079335-m02 has defined MAC address 52:54:00:d0:b2:84 in network mk-multinode-079335
	I0908 17:21:21.902277   38353 main.go:141] libmachine: (multinode-079335-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b2:84", ip: ""} in network mk-multinode-079335: {Iface:virbr1 ExpiryTime:2025-09-08 18:19:32 +0000 UTC Type:0 Mac:52:54:00:d0:b2:84 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-079335-m02 Clientid:01:52:54:00:d0:b2:84}
	I0908 17:21:21.902321   38353 main.go:141] libmachine: (multinode-079335-m02) DBG | domain multinode-079335-m02 has defined IP address 192.168.39.218 and MAC address 52:54:00:d0:b2:84 in network mk-multinode-079335
	I0908 17:21:21.902441   38353 host.go:66] Checking if "multinode-079335-m02" exists ...
	I0908 17:21:21.902750   38353 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:21:21.902787   38353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:21:21.918108   38353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I0908 17:21:21.918518   38353 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:21:21.918993   38353 main.go:141] libmachine: Using API Version  1
	I0908 17:21:21.919020   38353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:21:21.919308   38353 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:21:21.919488   38353 main.go:141] libmachine: (multinode-079335-m02) Calling .DriverName
	I0908 17:21:21.919681   38353 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 17:21:21.919705   38353 main.go:141] libmachine: (multinode-079335-m02) Calling .GetSSHHostname
	I0908 17:21:21.922395   38353 main.go:141] libmachine: (multinode-079335-m02) DBG | domain multinode-079335-m02 has defined MAC address 52:54:00:d0:b2:84 in network mk-multinode-079335
	I0908 17:21:21.922837   38353 main.go:141] libmachine: (multinode-079335-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:b2:84", ip: ""} in network mk-multinode-079335: {Iface:virbr1 ExpiryTime:2025-09-08 18:19:32 +0000 UTC Type:0 Mac:52:54:00:d0:b2:84 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-079335-m02 Clientid:01:52:54:00:d0:b2:84}
	I0908 17:21:21.922876   38353 main.go:141] libmachine: (multinode-079335-m02) DBG | domain multinode-079335-m02 has defined IP address 192.168.39.218 and MAC address 52:54:00:d0:b2:84 in network mk-multinode-079335
	I0908 17:21:21.923004   38353 main.go:141] libmachine: (multinode-079335-m02) Calling .GetSSHPort
	I0908 17:21:21.923138   38353 main.go:141] libmachine: (multinode-079335-m02) Calling .GetSSHKeyPath
	I0908 17:21:21.923224   38353 main.go:141] libmachine: (multinode-079335-m02) Calling .GetSSHUsername
	I0908 17:21:21.923350   38353 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21504-7629/.minikube/machines/multinode-079335-m02/id_rsa Username:docker}
	I0908 17:21:22.011142   38353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 17:21:22.027893   38353 status.go:176] multinode-079335-m02 status: &{Name:multinode-079335-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 17:21:22.027943   38353 status.go:174] checking status of multinode-079335-m03 ...
	I0908 17:21:22.028368   38353 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:21:22.028412   38353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:21:22.043555   38353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37329
	I0908 17:21:22.043971   38353 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:21:22.044404   38353 main.go:141] libmachine: Using API Version  1
	I0908 17:21:22.044423   38353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:21:22.044724   38353 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:21:22.044893   38353 main.go:141] libmachine: (multinode-079335-m03) Calling .GetState
	I0908 17:21:22.046327   38353 status.go:371] multinode-079335-m03 host status = "Stopped" (err=<nil>)
	I0908 17:21:22.046338   38353 status.go:384] host is not running, skipping remaining checks
	I0908 17:21:22.046344   38353 status.go:176] multinode-079335-m03 status: &{Name:multinode-079335-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.18s)
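The exit status 7 from `status` above is expected rather than an error: the test asserts that status exits with code 7 once a host in the profile is Stopped. A sketch of the same stop-and-inspect sequence, assuming the same profile:

# stop only the third node
out/minikube-linux-amd64 -p multinode-079335 node stop m03

# status now reports m03 as Stopped and exits 7; surface the code explicitly
out/minikube-linux-amd64 -p multinode-079335 status || echo "status exited with $?"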

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-079335 node start m03 -v=5 --alsologtostderr: (40.105674856s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.77s)
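Bringing the stopped worker back is the inverse step; a sketch, assuming m03 is currently stopped:

# restart the stopped worker
out/minikube-linux-amd64 -p multinode-079335 node start m03 -v=5 --alsologtostderr

# status should exit 0 again and kubectl should list all three nodes
out/minikube-linux-amd64 -p multinode-079335 status -v=5 --alsologtostderr
kubectl get nodes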

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (354.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-079335
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-079335
E0908 17:24:48.238915   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-079335: (3m4.165860604s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-079335 --wait=true -v=5 --alsologtostderr
E0908 17:25:54.298141   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:27:51.304612   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-079335 --wait=true -v=5 --alsologtostderr: (2m50.165032114s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-079335
--- PASS: TestMultiNode/serial/RestartKeepsNodes (354.43s)
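The restart test stops the whole profile, starts it again with --wait=true, and checks that the node list is unchanged. A sketch of that round trip, assuming the same profile (both stop and start can take several minutes on KVM, as the timings above show):

out/minikube-linux-amd64 node list -p multinode-079335          # record the node list
out/minikube-linux-amd64 stop -p multinode-079335               # stop every node
out/minikube-linux-amd64 start -p multinode-079335 --wait=true -v=5 --alsologtostderr
out/minikube-linux-amd64 node list -p multinode-079335          # should match the first listing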

                                                
                                    
TestMultiNode/serial/DeleteNode (2.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-079335 node delete m03: (2.277675736s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.83s)
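Node deletion is verified above with a go-template that prints only each node's Ready condition. The same check by hand, assuming kubectl still points at the multinode-079335 context:

# drop the third node from the profile
out/minikube-linux-amd64 -p multinode-079335 node delete m03

# every remaining node should print True for its Ready condition
kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"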

                                                
                                    
TestMultiNode/serial/StopMultiNode (182.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 stop
E0908 17:29:48.241694   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:30:54.302082   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-079335 stop: (3m1.925032786s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-079335 status: exit status 7 (91.345116ms)

                                                
                                                
-- stdout --
	multinode-079335
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-079335-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-079335 status --alsologtostderr: exit status 7 (82.275404ms)

                                                
                                                
-- stdout --
	multinode-079335
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-079335-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 17:31:02.140986   41304 out.go:360] Setting OutFile to fd 1 ...
	I0908 17:31:02.141221   41304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:31:02.141229   41304 out.go:374] Setting ErrFile to fd 2...
	I0908 17:31:02.141233   41304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:31:02.141438   41304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
	I0908 17:31:02.141586   41304 out.go:368] Setting JSON to false
	I0908 17:31:02.141605   41304 mustload.go:65] Loading cluster: multinode-079335
	I0908 17:31:02.141704   41304 notify.go:220] Checking for updates...
	I0908 17:31:02.141951   41304 config.go:182] Loaded profile config "multinode-079335": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:31:02.141968   41304 status.go:174] checking status of multinode-079335 ...
	I0908 17:31:02.142361   41304 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:31:02.142400   41304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:31:02.158035   41304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0908 17:31:02.158455   41304 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:31:02.159019   41304 main.go:141] libmachine: Using API Version  1
	I0908 17:31:02.159049   41304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:31:02.159360   41304 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:31:02.159541   41304 main.go:141] libmachine: (multinode-079335) Calling .GetState
	I0908 17:31:02.160927   41304 status.go:371] multinode-079335 host status = "Stopped" (err=<nil>)
	I0908 17:31:02.160939   41304 status.go:384] host is not running, skipping remaining checks
	I0908 17:31:02.160944   41304 status.go:176] multinode-079335 status: &{Name:multinode-079335 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 17:31:02.160983   41304 status.go:174] checking status of multinode-079335-m02 ...
	I0908 17:31:02.161268   41304 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21504-7629/.minikube/bin/docker-machine-driver-kvm2
	I0908 17:31:02.161302   41304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 17:31:02.176352   41304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45173
	I0908 17:31:02.176740   41304 main.go:141] libmachine: () Calling .GetVersion
	I0908 17:31:02.177117   41304 main.go:141] libmachine: Using API Version  1
	I0908 17:31:02.177149   41304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 17:31:02.177446   41304 main.go:141] libmachine: () Calling .GetMachineName
	I0908 17:31:02.177606   41304 main.go:141] libmachine: (multinode-079335-m02) Calling .GetState
	I0908 17:31:02.179071   41304 status.go:371] multinode-079335-m02 host status = "Stopped" (err=<nil>)
	I0908 17:31:02.179088   41304 status.go:384] host is not running, skipping remaining checks
	I0908 17:31:02.179095   41304 status.go:176] multinode-079335-m02 status: &{Name:multinode-079335-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.10s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (95.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-079335 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-079335 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m35.439807529s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079335 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (95.98s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (48.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-079335
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-079335-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-079335-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.286155ms)

                                                
                                                
-- stdout --
	* [multinode-079335-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-079335-m02' is duplicated with machine name 'multinode-079335-m02' in profile 'multinode-079335'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-079335-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-079335-m03 --driver=kvm2  --container-runtime=crio: (46.973978734s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-079335
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-079335: exit status 80 (224.76648ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-079335 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-079335-m03 already exists in multinode-079335-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-079335-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-079335-m03: (1.016938426s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.33s)
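The name-conflict test above confirms two guard rails: creating a profile whose name collides with a machine name owned by another profile fails with MK_USAGE (exit 14), and `node add` refuses a node whose name would collide with an existing profile (exit 80). A sketch of the first failure, assuming multinode-079335 already owns a machine called multinode-079335-m02:

# rejected: profile name duplicates an existing machine name
out/minikube-linux-amd64 start -p multinode-079335-m02 --driver=kvm2 --container-runtime=crio || echo "exited with $?"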

                                                
                                    
TestScheduledStopUnix (119.08s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-548762 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-548762 --memory=3072 --driver=kvm2  --container-runtime=crio: (47.431544147s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-548762 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-548762 -n scheduled-stop-548762
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-548762 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0908 17:36:56.083611   11781 retry.go:31] will retry after 134.676µs: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
I0908 17:36:56.084778   11781 retry.go:31] will retry after 105.994µs: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
I0908 17:36:56.085949   11781 retry.go:31] will retry after 266.684µs: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
I0908 17:36:56.087092   11781 retry.go:31] will retry after 359.704µs: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
I0908 17:36:56.088236   11781 retry.go:31] will retry after 646.665µs: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
I0908 17:36:56.089371   11781 retry.go:31] will retry after 526.577µs: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
I0908 17:36:56.090525   11781 retry.go:31] will retry after 590.97µs: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
I0908 17:36:56.091670   11781 retry.go:31] will retry after 1.261932ms: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
I0908 17:36:56.093888   11781 retry.go:31] will retry after 3.189051ms: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
I0908 17:36:56.098084   11781 retry.go:31] will retry after 5.607024ms: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
I0908 17:36:56.104370   11781 retry.go:31] will retry after 8.620448ms: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
I0908 17:36:56.113643   11781 retry.go:31] will retry after 6.698672ms: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
I0908 17:36:56.120903   11781 retry.go:31] will retry after 16.321816ms: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
I0908 17:36:56.138184   11781 retry.go:31] will retry after 23.364454ms: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
I0908 17:36:56.162423   11781 retry.go:31] will retry after 30.99636ms: open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/scheduled-stop-548762/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-548762 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-548762 -n scheduled-stop-548762
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-548762
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-548762 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-548762
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-548762: exit status 7 (64.370949ms)

                                                
                                                
-- stdout --
	scheduled-stop-548762
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-548762 -n scheduled-stop-548762
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-548762 -n scheduled-stop-548762: exit status 7 (65.759107ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-548762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-548762
--- PASS: TestScheduledStopUnix (119.08s)
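The scheduled-stop flow above schedules a shutdown, cancels it, then lets a short schedule fire. A sketch, assuming a running profile named scheduled-stop-548762 (the sleep is only there to let the 15s schedule expire):

# schedule a stop five minutes out, then cancel it
out/minikube-linux-amd64 stop -p scheduled-stop-548762 --schedule 5m
out/minikube-linux-amd64 stop -p scheduled-stop-548762 --cancel-scheduled

# schedule a short stop and let it fire; status then exits 7 with host: Stopped
out/minikube-linux-amd64 stop -p scheduled-stop-548762 --schedule 15s
sleep 20
out/minikube-linux-amd64 status -p scheduled-stop-548762 || echo "exited with $?"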

                                                
                                    
TestRunningBinaryUpgrade (184.43s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.317977397 start -p running-upgrade-120349 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.317977397 start -p running-upgrade-120349 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m59.897371096s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-120349 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-120349 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.781403646s)
helpers_test.go:175: Cleaning up "running-upgrade-120349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-120349
--- PASS: TestRunningBinaryUpgrade (184.43s)
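The running-binary upgrade starts a cluster with an older minikube release and then re-runs `start` against the same profile with the binary under test. A sketch; /tmp/minikube-v1.32.0.317977397 is the temporary copy of the v1.32.0 release that the test downloads, and that older release still spells the driver flag as --vm-driver:

# create the cluster with the old release
/tmp/minikube-v1.32.0.317977397 start -p running-upgrade-120349 --memory=3072 --vm-driver=kvm2 --container-runtime=crio

# upgrade in place by re-running start with the new binary on the same profile
out/minikube-linux-amd64 start -p running-upgrade-120349 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio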

                                                
                                    
TestKubernetesUpgrade (201.82s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-298230 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-298230 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.353696107s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-298230
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-298230: (2.306288677s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-298230 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-298230 status --format={{.Host}}: exit status 7 (64.000868ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-298230 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-298230 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.491100622s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-298230 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-298230 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-298230 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (84.065698ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-298230] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-298230
	    minikube start -p kubernetes-upgrade-298230 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2982302 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-298230 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-298230 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-298230 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.506649707s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-298230" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-298230
--- PASS: TestKubernetesUpgrade (201.82s)
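The Kubernetes upgrade test creates a v1.28.0 cluster, stops it, upgrades it to v1.34.0, confirms that a downgrade request is refused with K8S_DOWNGRADE_UNSUPPORTED (exit 106), and finally restarts at the new version. A sketch of the core steps, assuming the kubernetes-upgrade-298230 profile:

out/minikube-linux-amd64 start -p kubernetes-upgrade-298230 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 stop -p kubernetes-upgrade-298230
out/minikube-linux-amd64 start -p kubernetes-upgrade-298230 --memory=3072 --kubernetes-version=v1.34.0 --driver=kvm2 --container-runtime=crio

# downgrading the existing cluster is rejected (exit 106); delete and recreate instead
out/minikube-linux-amd64 start -p kubernetes-upgrade-298230 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio || echo "exited with $?"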

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-104185 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-104185 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (88.215662ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-104185] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
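The MK_USAGE failure above (exit 14) is the documented guard: --no-kubernetes cannot be combined with --kubernetes-version, and a globally pinned version has to be unset first. A sketch:

# rejected: the two flags are mutually exclusive
out/minikube-linux-amd64 start -p NoKubernetes-104185 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio || echo "exited with $?"

# if kubernetes-version is pinned in the global config, clear it before using --no-kubernetes
out/minikube-linux-amd64 config unset kubernetes-version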

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (99.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-104185 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-104185 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m39.550174559s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-104185 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (99.82s)

                                                
                                    
TestNetworkPlugins/group/false (3.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-387181 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-387181 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (115.697935ms)

                                                
                                                
-- stdout --
	* [false-387181] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 17:39:16.009689   46264 out.go:360] Setting OutFile to fd 1 ...
	I0908 17:39:16.009896   46264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:39:16.009904   46264 out.go:374] Setting ErrFile to fd 2...
	I0908 17:39:16.009909   46264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 17:39:16.010094   46264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-7629/.minikube/bin
	I0908 17:39:16.010717   46264 out.go:368] Setting JSON to false
	I0908 17:39:16.011751   46264 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4899,"bootTime":1757348257,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 17:39:16.011839   46264 start.go:140] virtualization: kvm guest
	I0908 17:39:16.013674   46264 out.go:179] * [false-387181] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 17:39:16.015211   46264 notify.go:220] Checking for updates...
	I0908 17:39:16.015230   46264 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 17:39:16.016435   46264 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 17:39:16.017802   46264 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-7629/kubeconfig
	I0908 17:39:16.018973   46264 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-7629/.minikube
	I0908 17:39:16.020149   46264 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 17:39:16.021226   46264 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 17:39:16.022944   46264 config.go:182] Loaded profile config "NoKubernetes-104185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:39:16.023084   46264 config.go:182] Loaded profile config "force-systemd-env-113303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 17:39:16.023178   46264 config.go:182] Loaded profile config "running-upgrade-120349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I0908 17:39:16.023262   46264 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 17:39:16.060968   46264 out.go:179] * Using the kvm2 driver based on user configuration
	I0908 17:39:16.062346   46264 start.go:304] selected driver: kvm2
	I0908 17:39:16.062362   46264 start.go:918] validating driver "kvm2" against <nil>
	I0908 17:39:16.062373   46264 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 17:39:16.064355   46264 out.go:203] 
	W0908 17:39:16.065693   46264 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0908 17:39:16.066875   46264 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-387181 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-387181

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-387181

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-387181

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-387181

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-387181

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-387181

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-387181

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-387181

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-387181

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-387181

>>> host: /etc/nsswitch.conf:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: /etc/hosts:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: /etc/resolv.conf:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-387181

>>> host: crictl pods:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: crictl containers:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> k8s: describe netcat deployment:
error: context "false-387181" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-387181" does not exist

>>> k8s: netcat logs:
error: context "false-387181" does not exist

>>> k8s: describe coredns deployment:
error: context "false-387181" does not exist

>>> k8s: describe coredns pods:
error: context "false-387181" does not exist

>>> k8s: coredns logs:
error: context "false-387181" does not exist

>>> k8s: describe api server pod(s):
error: context "false-387181" does not exist

>>> k8s: api server logs:
error: context "false-387181" does not exist

>>> host: /etc/cni:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: ip a s:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: ip r s:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: iptables-save:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: iptables table nat:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> k8s: describe kube-proxy daemon set:
error: context "false-387181" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-387181" does not exist

>>> k8s: kube-proxy logs:
error: context "false-387181" does not exist

>>> host: kubelet daemon status:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: kubelet daemon config:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> k8s: kubelet logs:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-387181

>>> host: docker daemon status:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: docker daemon config:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: /etc/docker/daemon.json:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: docker system info:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: cri-docker daemon status:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: cri-docker daemon config:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: cri-dockerd version:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: containerd daemon status:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: containerd daemon config:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: /etc/containerd/config.toml:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: containerd config dump:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: crio daemon status:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: crio daemon config:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: /etc/crio:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

>>> host: crio config:
* Profile "false-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387181"

----------------------- debugLogs end: false-387181 [took: 2.890646393s] --------------------------------
helpers_test.go:175: Cleaning up "false-387181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-387181
--- PASS: TestNetworkPlugins/group/false (3.16s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (64.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-104185 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0908 17:39:48.238068   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-104185 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m2.840514867s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-104185 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-104185 status -o json: exit status 2 (245.907425ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-104185","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-104185
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-104185: (1.012278954s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (64.10s)

                                                
                                    
TestNoKubernetes/serial/Start (58.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-104185 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0908 17:40:54.296825   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-104185 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (58.138026s)
--- PASS: TestNoKubernetes/serial/Start (58.14s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.10s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (154.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1656003357 start -p stopped-upgrade-809381 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1656003357 start -p stopped-upgrade-809381 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m13.983714823s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1656003357 -p stopped-upgrade-809381 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1656003357 -p stopped-upgrade-809381 stop: (2.165398705s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-809381 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-809381 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.794299561s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (154.94s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-104185 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-104185 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.0084ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-104185
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-104185: (1.485977491s)
--- PASS: TestNoKubernetes/serial/Stop (1.49s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (70.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-104185 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-104185 --driver=kvm2  --container-runtime=crio: (1m10.524263854s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (70.52s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-104185 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-104185 "sudo systemctl is-active --quiet service kubelet": exit status 1 (241.16917ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
TestPause/serial/Start (102.87s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-582402 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-582402 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m42.868942625s)
--- PASS: TestPause/serial/Start (102.87s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-809381
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-809381: (1.324908314s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (90.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0908 17:44:31.308838   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m30.483966395s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.48s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (208.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (3m28.427779211s)
--- PASS: TestNetworkPlugins/group/flannel/Start (208.43s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (140.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m20.270239542s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (140.27s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-387181 "pgrep -a kubelet"
I0908 17:45:56.458820   11781 config.go:182] Loaded profile config "auto-387181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-387181 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context auto-387181 replace --force -f testdata/netcat-deployment.yaml: (1.524726986s)
I0908 17:45:58.506268   11781 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0908 17:45:58.511755   11781 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fhxv2" [2c804275-76ed-405d-a2a6-3ad3ffca3334] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fhxv2" [2c804275-76ed-405d-a2a6-3ad3ffca3334] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004375316s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.08s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-387181 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-387181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-387181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (88.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m28.689940366s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.69s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (84.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m24.313981024s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-387181 "pgrep -a kubelet"
I0908 17:46:56.411444   11781 config.go:182] Loaded profile config "enable-default-cni-387181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-387181 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nxzr8" [91682c2f-44e1-4c38-a8b2-b725f0986fae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nxzr8" [91682c2f-44e1-4c38-a8b2-b725f0986fae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.004831079s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-387181 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-387181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-387181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (70.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m10.396038399s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-387181 "pgrep -a kubelet"
I0908 17:47:54.741873   11781 config.go:182] Loaded profile config "bridge-387181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-387181 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9zbx7" [95019b5a-8d6b-4223-ba45-18cdd43de5f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9zbx7" [95019b5a-8d6b-4223-ba45-18cdd43de5f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005810055s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-6tccg" [88549c67-2013-463c-a256-cc29b14f02ba] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004695761s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-387181 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-387181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-387181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-387181 "pgrep -a kubelet"
I0908 17:48:08.325459   11781 config.go:182] Loaded profile config "flannel-387181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-387181 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2b9lc" [29d7d340-ef48-483e-87ad-5a91ae1732a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2b9lc" [29d7d340-ef48-483e-87ad-5a91ae1732a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005974388s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-42fs6" [894d51de-54d0-44a1-8bde-ac407e7ae5e1] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00934007s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-387181 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-387181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-387181 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-387181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
I0908 17:48:20.139418   11781 config.go:182] Loaded profile config "calico-387181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-387181 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xm2dq" [53a3bebc-63f2-4ec3-b53f-00a95dd7340a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xm2dq" [53a3bebc-63f2-4ec3-b53f-00a95dd7340a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005649077s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (81.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-387181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.519754695s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.52s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-387181 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-387181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-387181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (105.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-016072 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-016072 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m45.746802347s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (105.75s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-f6b49" [36b3d303-7cd4-4b7e-a917-691a91855840] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004784959s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-387181 "pgrep -a kubelet"
I0908 17:48:44.252575   11781 config.go:182] Loaded profile config "kindnet-387181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-387181 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zlsr6" [2b00c631-a56e-4fd5-8446-51a3c348f33a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zlsr6" [2b00c631-a56e-4fd5-8446-51a3c348f33a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005004075s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (115.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-703091 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-703091 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m55.554514973s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (115.55s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-387181 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-387181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-387181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (122.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-543134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-543134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (2m2.912464629s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (122.91s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-387181 "pgrep -a kubelet"
I0908 17:49:45.699885   11781 config.go:182] Loaded profile config "custom-flannel-387181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-387181 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h8bdq" [bb3034c7-a1e2-4f6b-a4f6-5aa06e8c83ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 17:49:48.238797   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-h8bdq" [bb3034c7-a1e2-4f6b-a4f6-5aa06e8c83ec] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004882244s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-387181 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-387181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-387181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
E0908 17:53:58.528084   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.02s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-591652 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-591652 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m6.017567674s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.02s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.4s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-016072 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0294956e-f058-43b9-9963-1afcecb2e576] Pending
helpers_test.go:352: "busybox" [0294956e-f058-43b9-9963-1afcecb2e576] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0294956e-f058-43b9-9963-1afcecb2e576] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.00412545s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-016072 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-016072 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-016072 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.159791663s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-016072 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/old-k8s-version/serial/Stop (91.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-016072 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-016072 --alsologtostderr -v=3: (1m31.089835093s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.09s)

TestStartStop/group/no-preload/serial/DeployApp (12.3s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-703091 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [59ccd18f-38f9-4afe-8bc1-615f3ba1400e] Pending
helpers_test.go:352: "busybox" [59ccd18f-38f9-4afe-8bc1-615f3ba1400e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [59ccd18f-38f9-4afe-8bc1-615f3ba1400e] Running
E0908 17:50:54.297194   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.006128397s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-703091 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.30s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.68s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-703091 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-703091 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.602373033s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-703091 describe deploy/metrics-server -n kube-system
E0908 17:50:57.986253   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:50:57.992702   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:50:58.004224   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:50:58.025769   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.68s)

TestStartStop/group/no-preload/serial/Stop (91.07s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-703091 --alsologtostderr -v=3
E0908 17:50:58.067898   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:50:58.149442   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:50:58.311263   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:50:58.633165   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:50:59.274589   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:51:00.555948   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:51:03.118203   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:51:08.239851   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-703091 --alsologtostderr -v=3: (1m31.067892182s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.07s)

TestStartStop/group/embed-certs/serial/DeployApp (11.29s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-543134 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4c9574fb-cc52-4a26-96c7-551b7aec4a7e] Pending
helpers_test.go:352: "busybox" [4c9574fb-cc52-4a26-96c7-551b7aec4a7e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0908 17:51:18.482154   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [4c9574fb-cc52-4a26-96c7-551b7aec4a7e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003497818s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-543134 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-591652 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [25248080-2773-41de-9d1f-4bb4e1866114] Pending
helpers_test.go:352: "busybox" [25248080-2773-41de-9d1f-4bb4e1866114] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [25248080-2773-41de-9d1f-4bb4e1866114] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005625191s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-591652 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-543134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-543134 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/embed-certs/serial/Stop (91.73s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-543134 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-543134 --alsologtostderr -v=3: (1m31.72504883s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.73s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-591652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-591652 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.41s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-591652 --alsologtostderr -v=3
E0908 17:51:38.964294   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:51:56.698804   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:51:56.705193   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:51:56.716612   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:51:56.738122   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:51:56.780095   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:51:56.861668   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:51:57.023285   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:51:57.345058   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:51:57.987052   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:51:59.268718   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:52:01.830738   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:52:06.952145   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-591652 --alsologtostderr -v=3: (1m31.411524481s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.41s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-016072 -n old-k8s-version-016072
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-016072 -n old-k8s-version-016072: exit status 7 (65.066663ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-016072 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (50.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-016072 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E0908 17:52:17.194212   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:52:17.371750   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/addons-198632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:52:19.926460   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-016072 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (49.743867884s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-016072 -n old-k8s-version-016072
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.12s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703091 -n no-preload-703091
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703091 -n no-preload-703091: exit status 7 (63.612422ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-703091 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (65.94s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-703091 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 17:52:37.676086   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:52:54.994117   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:52:55.000634   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:52:55.012015   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:52:55.033495   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:52:55.074953   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:52:55.156585   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:52:55.318201   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:52:55.639763   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:52:56.282046   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:52:57.563611   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-703091 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m5.452018114s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-703091 -n no-preload-703091
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (65.94s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.05s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4fwzf" [e27a2433-77d5-49d9-a2c6-30b9375f6c12] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4fwzf" [e27a2433-77d5-49d9-a2c6-30b9375f6c12] Running
E0908 17:53:05.246952   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:07.188503   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.050401697s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.05s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-543134 -n embed-certs-543134
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-543134 -n embed-certs-543134: exit status 7 (92.265904ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-543134 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (51.31s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-543134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 17:53:00.125025   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:02.056333   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:02.062814   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:02.074390   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:02.095895   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:02.137375   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:02.218843   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:02.380803   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:02.702757   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:03.344655   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-543134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (50.833143102s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-543134 -n embed-certs-543134
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.31s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-591652 -n default-k8s-diff-port-591652
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-591652 -n default-k8s-diff-port-591652: exit status 7 (76.329745ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-591652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (73.71s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-591652 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 17:53:04.626476   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-591652 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m13.354798018s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-591652 -n default-k8s-diff-port-591652
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (73.71s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4fwzf" [e27a2433-77d5-49d9-a2c6-30b9375f6c12] Running
E0908 17:53:12.310392   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:13.843378   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:13.849858   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:13.861289   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:13.882821   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:13.924353   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:14.006483   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:14.168110   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:14.489922   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:15.131593   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:15.488993   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005320296s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-016072 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-016072 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (3.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-016072 --alsologtostderr -v=1
E0908 17:53:16.413805   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-016072 -n old-k8s-version-016072
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-016072 -n old-k8s-version-016072: exit status 2 (269.628977ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-016072 -n old-k8s-version-016072
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-016072 -n old-k8s-version-016072: exit status 2 (261.54261ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-016072 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-016072 -n old-k8s-version-016072
E0908 17:53:18.638324   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/enable-default-cni-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-016072 -n old-k8s-version-016072
E0908 17:53:18.976164   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.08s)

TestStartStop/group/newest-cni/serial/FirstStart (80.71s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-141916 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 17:53:22.552749   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:24.098058   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:34.339714   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-141916 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m20.708690731s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (80.71s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xhnc9" [937b0c39-5c00-458b-9a44-c94090f6ebc4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 17:53:35.970416   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/bridge-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:38.030430   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:38.036819   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:38.048374   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:38.070773   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:38.112126   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:38.194344   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:38.356609   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:38.678902   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:39.320628   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:40.602171   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:41.848198   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/auto-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xhnc9" [937b0c39-5c00-458b-9a44-c94090f6ebc4] Running
E0908 17:53:43.034559   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:53:43.164139   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004846917s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xhnc9" [937b0c39-5c00-458b-9a44-c94090f6ebc4] Running
E0908 17:53:48.285793   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004943866s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-703091 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6lck9" [a9890b7f-d415-4ef0-8ca8-c68b6dacbd02] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6lck9" [a9890b7f-d415-4ef0-8ca8-c68b6dacbd02] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.005790949s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-703091 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/no-preload/serial/Pause (4.15s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-703091 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-703091 --alsologtostderr -v=1: (1.734409597s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-703091 -n no-preload-703091
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-703091 -n no-preload-703091: exit status 2 (299.674422ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-703091 -n no-preload-703091
E0908 17:53:54.821405   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-703091 -n no-preload-703091: exit status 2 (284.569369ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-703091 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-703091 --alsologtostderr -v=1: (1.068000048s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-703091 -n no-preload-703091
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-703091 -n no-preload-703091
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.15s)
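For reference, the pause/unpause round trip this test drives can be reproduced by hand; exit status 2 from `status` while components are paused is expected, which is why the harness notes "(may be ok)". A minimal sketch against the no-preload-703091 profile from this run:

    minikube pause -p no-preload-703091
    minikube status --format='{{.APIServer}}' -p no-preload-703091   # prints "Paused", exits 2
    minikube status --format='{{.Kubelet}}' -p no-preload-703091     # prints "Stopped", exits 2
    minikube unpause -p no-preload-703091
    minikube status --format='{{.APIServer}}' -p no-preload-703091   # running again, exit 0 expected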

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6lck9" [a9890b7f-d415-4ef0-8ca8-c68b6dacbd02] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004793977s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-543134 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-543134 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
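The image check simply lists what the profile's runtime has loaded and flags anything outside the stock Kubernetes/minikube set (here kindnetd and the busybox test image). The same listing can be pulled manually; a sketch assuming the embed-certs-543134 profile is still present:

    minikube -p embed-certs-543134 image list --format=json   # JSON, as the test consumes it
    minikube -p embed-certs-543134 image list                 # human-readable listing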

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-543134 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-543134 -n embed-certs-543134
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-543134 -n embed-certs-543134: exit status 2 (252.330505ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-543134 -n embed-certs-543134
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-543134 -n embed-certs-543134: exit status 2 (254.748271ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-543134 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-543134 -n embed-certs-543134
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-543134 -n embed-certs-543134
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hbs9x" [eb669561-c9b1-4d1b-8390-d834a4a99407] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 17:54:19.010333   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hbs9x" [eb669561-c9b1-4d1b-8390-d834a4a99407] Running
E0908 17:54:23.996158   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.003810117s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hbs9x" [eb669561-c9b1-4d1b-8390-d834a4a99407] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004791692s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-591652 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-591652 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-591652 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-591652 --alsologtostderr -v=1: (1.000863721s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-591652 -n default-k8s-diff-port-591652
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-591652 -n default-k8s-diff-port-591652: exit status 2 (261.553075ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-591652 -n default-k8s-diff-port-591652
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-591652 -n default-k8s-diff-port-591652: exit status 2 (255.935039ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-591652 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-591652 -n default-k8s-diff-port-591652
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-591652 -n default-k8s-diff-port-591652
E0908 17:54:35.782713   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/calico-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-141916 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)
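The addon enable above exercises the per-addon image and registry overrides: the test points metrics-server at a stand-in image and a fake registry so no real pull happens. The equivalent interactive invocation, copied from the command in the log:

    minikube -p newest-cni-141916 addons enable metrics-server \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain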

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-141916 --alsologtostderr -v=3
E0908 17:54:45.975497   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/custom-flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:54:45.981848   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/custom-flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:54:45.993228   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/custom-flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:54:46.014617   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/custom-flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:54:46.056085   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/custom-flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:54:46.137574   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/custom-flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:54:46.299109   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/custom-flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:54:46.621334   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/custom-flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:54:47.263516   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/custom-flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:54:48.238672   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/functional-504207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:54:48.545012   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/custom-flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:54:51.106427   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/custom-flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-141916 --alsologtostderr -v=3: (10.555021122s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.56s)
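The interleaved cert_rotation errors during the stop appear to be client-go complaining about client certificates referenced from kubeconfig entries for profiles (custom-flannel-387181, functional-504207, ...) that were already deleted earlier in the run; the test still passes. A hedged way to confirm and clean up such stale contexts locally (the context name below is taken from the error messages and may not exist on another machine):

    kubectl config get-contexts                          # look for contexts of deleted profiles
    kubectl config delete-context custom-flannel-387181  # remove a stale entry, if present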

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-141916 -n newest-cni-141916
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-141916 -n newest-cni-141916: exit status 7 (63.601202ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-141916 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (39.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-141916 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 17:54:56.228653   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/custom-flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:54:59.971861   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/kindnet-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:55:06.470713   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/custom-flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:55:23.810176   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/old-k8s-version-016072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:55:23.816582   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/old-k8s-version-016072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:55:23.827998   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/old-k8s-version-016072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:55:23.849743   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/old-k8s-version-016072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:55:23.891208   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/old-k8s-version-016072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:55:23.972820   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/old-k8s-version-016072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:55:24.134639   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/old-k8s-version-016072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:55:24.456817   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/old-k8s-version-016072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:55:25.099143   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/old-k8s-version-016072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:55:26.381515   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/old-k8s-version-016072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:55:26.952996   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/custom-flannel-387181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 17:55:28.943018   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/old-k8s-version-016072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-141916 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (38.810253658s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-141916 -n newest-cni-141916
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.16s)
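The second start reuses the existing newest-cni profile and only has to re-provision Kubernetes, which presumably is why it completes in under 40 seconds. The flag set, copied from the command above, shows the CNI-specific pieces: an explicit --network-plugin=cni plus a kubeadm pod-network CIDR override:

    minikube start -p newest-cni-141916 --memory=3072 \
      --wait=apiserver,system_pods,default_sa \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.34.0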

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-141916 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-141916 --alsologtostderr -v=1
E0908 17:55:34.065066   11781 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/old-k8s-version-016072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-141916 --alsologtostderr -v=1: (1.729149306s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-141916 -n newest-cni-141916
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-141916 -n newest-cni-141916: exit status 2 (290.09165ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-141916 -n newest-cni-141916
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-141916 -n newest-cni-141916: exit status 2 (320.223659ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-141916 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-141916 --alsologtostderr -v=1: (1.09278702s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-141916 -n newest-cni-141916
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-141916 -n newest-cni-141916
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.14s)

                                                
                                    

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.0/cached-images 0
15 TestDownloadOnly/v1.34.0/binaries 0
16 TestDownloadOnly/v1.34.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.33
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 3.37
267 TestNetworkPlugins/group/cilium 3.33
285 TestStartStop/group/disable-driver-mounts 0.2
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.33s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-198632 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-387181 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-387181

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-387181

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-387181

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-387181

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-387181

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-387181

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-387181

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-387181

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-387181

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-387181

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-387181

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-387181" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-387181" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-387181

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387181"

                                                
                                                
----------------------- debugLogs end: kubenet-387181 [took: 3.216252324s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-387181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-387181
--- SKIP: TestNetworkPlugins/group/kubenet (3.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-387181 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-387181" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21504-7629/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 17:39:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.50.38:8443
  name: force-systemd-env-113303
contexts:
- context:
    cluster: force-systemd-env-113303
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 17:39:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: force-systemd-env-113303
  name: force-systemd-env-113303
current-context: force-systemd-env-113303
kind: Config
preferences: {}
users:
- name: force-systemd-env-113303
  user:
    client-certificate: /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/force-systemd-env-113303/client.crt
    client-key: /home/jenkins/minikube-integration/21504-7629/.minikube/profiles/force-systemd-env-113303/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-387181

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-387181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387181"

                                                
                                                
----------------------- debugLogs end: cilium-387181 [took: 3.194268496s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-387181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-387181
--- SKIP: TestNetworkPlugins/group/cilium (3.33s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-798570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-798570
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    